00:00:00.001 Started by upstream project "autotest-per-patch" build number 126163 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "jbp-per-patch" build number 23912 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.049 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.053 The recommended git tool is: git 00:00:00.053 using credential 00000000-0000-0000-0000-000000000002 00:00:00.056 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.079 Fetching changes from the remote Git repository 00:00:00.087 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.134 Using shallow fetch with depth 1 00:00:00.134 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.134 > git --version # timeout=10 00:00:00.189 > git --version # 'git version 2.39.2' 00:00:00.189 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.233 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.233 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/71/24171/1 # timeout=5 00:00:05.278 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.292 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.305 Checking out Revision f574307dba849e7d22dd5631ce9e594362bd2ebc (FETCH_HEAD) 00:00:05.305 > git config core.sparsecheckout # timeout=10 00:00:05.315 > git read-tree -mu HEAD # timeout=10 00:00:05.334 > git checkout -f f574307dba849e7d22dd5631ce9e594362bd2ebc # timeout=5 00:00:05.356 Commit message: "packer: Drop centos7" 00:00:05.356 > git rev-list --no-walk 055051402f6bd793109ccc450ac2f885bb0fdaeb # timeout=10 00:00:05.474 [Pipeline] Start of Pipeline 00:00:05.491 [Pipeline] library 00:00:05.492 Loading library shm_lib@master 00:00:05.493 Library shm_lib@master is cached. Copying from home. 00:00:05.510 [Pipeline] node 00:00:05.519 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.521 [Pipeline] { 00:00:05.531 [Pipeline] catchError 00:00:05.533 [Pipeline] { 00:00:05.543 [Pipeline] wrap 00:00:05.549 [Pipeline] { 00:00:05.554 [Pipeline] stage 00:00:05.555 [Pipeline] { (Prologue) 00:00:05.748 [Pipeline] sh 00:00:06.024 + logger -p user.info -t JENKINS-CI 00:00:06.042 [Pipeline] echo 00:00:06.043 Node: GP6 00:00:06.050 [Pipeline] sh 00:00:06.345 [Pipeline] setCustomBuildProperty 00:00:06.358 [Pipeline] echo 00:00:06.359 Cleanup processes 00:00:06.365 [Pipeline] sh 00:00:06.646 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.646 990294 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.665 [Pipeline] sh 00:00:06.955 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.955 ++ awk '{print $1}' 00:00:06.955 ++ grep -v 'sudo pgrep' 00:00:06.955 + sudo kill -9 00:00:06.955 + true 00:00:06.972 [Pipeline] cleanWs 00:00:06.982 [WS-CLEANUP] Deleting project workspace... 00:00:06.982 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.990 [WS-CLEANUP] done 00:00:06.994 [Pipeline] setCustomBuildProperty 00:00:07.008 [Pipeline] sh 00:00:07.290 + sudo git config --global --replace-all safe.directory '*' 00:00:07.395 [Pipeline] httpRequest 00:00:07.417 [Pipeline] echo 00:00:07.419 Sorcerer 10.211.164.101 is alive 00:00:07.428 [Pipeline] httpRequest 00:00:07.432 HttpMethod: GET 00:00:07.433 URL: http://10.211.164.101/packages/jbp_f574307dba849e7d22dd5631ce9e594362bd2ebc.tar.gz 00:00:07.434 Sending request to url: http://10.211.164.101/packages/jbp_f574307dba849e7d22dd5631ce9e594362bd2ebc.tar.gz 00:00:07.437 Response Code: HTTP/1.1 200 OK 00:00:07.438 Success: Status code 200 is in the accepted range: 200,404 00:00:07.438 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f574307dba849e7d22dd5631ce9e594362bd2ebc.tar.gz 00:00:08.127 [Pipeline] sh 00:00:08.408 + tar --no-same-owner -xf jbp_f574307dba849e7d22dd5631ce9e594362bd2ebc.tar.gz 00:00:08.424 [Pipeline] httpRequest 00:00:08.447 [Pipeline] echo 00:00:08.449 Sorcerer 10.211.164.101 is alive 00:00:08.456 [Pipeline] httpRequest 00:00:08.460 HttpMethod: GET 00:00:08.461 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:08.462 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:08.466 Response Code: HTTP/1.1 200 OK 00:00:08.466 Success: Status code 200 is in the accepted range: 200,404 00:00:08.467 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:27.301 [Pipeline] sh 00:00:27.586 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:30.135 [Pipeline] sh 00:00:30.420 + git -C spdk log --oneline -n5 00:00:30.420 719d03c6a sock/uring: only register net impl if supported 00:00:30.420 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:30.420 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:30.420 6c7c1f57e accel: add sequence outstanding stat 00:00:30.420 3bc8e6a26 accel: add utility to put task 00:00:30.433 [Pipeline] } 00:00:30.452 [Pipeline] // stage 00:00:30.462 [Pipeline] stage 00:00:30.464 [Pipeline] { (Prepare) 00:00:30.482 [Pipeline] writeFile 00:00:30.502 [Pipeline] sh 00:00:30.786 + logger -p user.info -t JENKINS-CI 00:00:30.800 [Pipeline] sh 00:00:31.085 + logger -p user.info -t JENKINS-CI 00:00:31.099 [Pipeline] sh 00:00:31.384 + cat autorun-spdk.conf 00:00:31.384 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.384 SPDK_TEST_NVMF=1 00:00:31.384 SPDK_TEST_NVME_CLI=1 00:00:31.384 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.384 SPDK_TEST_NVMF_NICS=e810 00:00:31.384 SPDK_TEST_VFIOUSER=1 00:00:31.384 SPDK_RUN_UBSAN=1 00:00:31.384 NET_TYPE=phy 00:00:31.391 RUN_NIGHTLY=0 00:00:31.397 [Pipeline] readFile 00:00:31.433 [Pipeline] withEnv 00:00:31.435 [Pipeline] { 00:00:31.450 [Pipeline] sh 00:00:31.770 + set -ex 00:00:31.770 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:31.770 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:31.770 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.770 ++ SPDK_TEST_NVMF=1 00:00:31.770 ++ SPDK_TEST_NVME_CLI=1 00:00:31.770 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.770 ++ SPDK_TEST_NVMF_NICS=e810 00:00:31.770 ++ SPDK_TEST_VFIOUSER=1 00:00:31.770 ++ SPDK_RUN_UBSAN=1 00:00:31.770 ++ NET_TYPE=phy 00:00:31.770 ++ RUN_NIGHTLY=0 00:00:31.770 + case $SPDK_TEST_NVMF_NICS in 00:00:31.770 + DRIVERS=ice 00:00:31.770 + [[ tcp == \r\d\m\a ]] 
00:00:31.770 + [[ -n ice ]] 00:00:31.770 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:31.770 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:31.770 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:31.770 rmmod: ERROR: Module irdma is not currently loaded 00:00:31.770 rmmod: ERROR: Module i40iw is not currently loaded 00:00:31.770 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:31.770 + true 00:00:31.770 + for D in $DRIVERS 00:00:31.770 + sudo modprobe ice 00:00:31.770 + exit 0 00:00:31.782 [Pipeline] } 00:00:31.801 [Pipeline] // withEnv 00:00:31.809 [Pipeline] } 00:00:31.829 [Pipeline] // stage 00:00:31.840 [Pipeline] catchError 00:00:31.841 [Pipeline] { 00:00:31.856 [Pipeline] timeout 00:00:31.856 Timeout set to expire in 50 min 00:00:31.858 [Pipeline] { 00:00:31.873 [Pipeline] stage 00:00:31.876 [Pipeline] { (Tests) 00:00:31.890 [Pipeline] sh 00:00:32.175 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:32.175 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:32.175 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:32.175 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:32.175 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:32.175 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:32.175 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:32.175 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:32.175 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:32.175 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:32.175 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:32.175 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:32.175 + source /etc/os-release 00:00:32.175 ++ NAME='Fedora Linux' 00:00:32.175 ++ VERSION='38 (Cloud Edition)' 00:00:32.175 ++ ID=fedora 00:00:32.175 ++ VERSION_ID=38 00:00:32.175 ++ VERSION_CODENAME= 00:00:32.175 ++ PLATFORM_ID=platform:f38 00:00:32.175 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:32.175 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:32.175 ++ LOGO=fedora-logo-icon 00:00:32.175 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:32.175 ++ HOME_URL=https://fedoraproject.org/ 00:00:32.175 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:32.175 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:32.175 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:32.175 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:32.175 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:32.175 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:32.175 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:32.175 ++ SUPPORT_END=2024-05-14 00:00:32.175 ++ VARIANT='Cloud Edition' 00:00:32.175 ++ VARIANT_ID=cloud 00:00:32.175 + uname -a 00:00:32.175 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:32.175 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:33.113 Hugepages 00:00:33.113 node hugesize free / total 00:00:33.113 node0 1048576kB 0 / 0 00:00:33.113 node0 2048kB 0 / 0 00:00:33.113 node1 1048576kB 0 / 0 00:00:33.113 node1 2048kB 0 / 0 00:00:33.113 00:00:33.113 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:33.113 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:33.113 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:33.113 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:33.113 I/OAT 0000:00:04.3 
8086 0e23 0 ioatdma - - 00:00:33.113 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:33.113 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:33.113 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:33.113 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:33.113 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:33.113 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:33.113 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:33.113 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:33.113 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:33.113 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:33.113 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:33.113 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:33.113 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:33.113 + rm -f /tmp/spdk-ld-path 00:00:33.371 + source autorun-spdk.conf 00:00:33.371 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:33.371 ++ SPDK_TEST_NVMF=1 00:00:33.371 ++ SPDK_TEST_NVME_CLI=1 00:00:33.371 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:33.371 ++ SPDK_TEST_NVMF_NICS=e810 00:00:33.371 ++ SPDK_TEST_VFIOUSER=1 00:00:33.371 ++ SPDK_RUN_UBSAN=1 00:00:33.371 ++ NET_TYPE=phy 00:00:33.371 ++ RUN_NIGHTLY=0 00:00:33.371 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:33.371 + [[ -n '' ]] 00:00:33.371 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:33.371 + for M in /var/spdk/build-*-manifest.txt 00:00:33.371 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:33.371 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:33.371 + for M in /var/spdk/build-*-manifest.txt 00:00:33.371 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:33.371 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:33.371 ++ uname 00:00:33.371 + [[ Linux == \L\i\n\u\x ]] 00:00:33.371 + sudo dmesg -T 00:00:33.371 + sudo dmesg --clear 00:00:33.371 + dmesg_pid=990967 00:00:33.371 + [[ Fedora Linux == FreeBSD ]] 00:00:33.371 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:33.371 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:33.371 + sudo dmesg -Tw 00:00:33.371 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:33.371 + [[ -x /usr/src/fio-static/fio ]] 00:00:33.371 + export FIO_BIN=/usr/src/fio-static/fio 00:00:33.371 + FIO_BIN=/usr/src/fio-static/fio 00:00:33.371 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:33.371 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:33.371 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:33.371 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:33.371 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:33.371 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:33.371 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:33.371 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:33.371 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:33.371 Test configuration: 00:00:33.371 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:33.371 SPDK_TEST_NVMF=1 00:00:33.371 SPDK_TEST_NVME_CLI=1 00:00:33.371 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:33.371 SPDK_TEST_NVMF_NICS=e810 00:00:33.371 SPDK_TEST_VFIOUSER=1 00:00:33.371 SPDK_RUN_UBSAN=1 00:00:33.371 NET_TYPE=phy 00:00:33.371 RUN_NIGHTLY=0 10:17:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:33.371 10:17:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:33.371 10:17:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:33.371 10:17:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:33.371 10:17:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.372 10:17:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.372 10:17:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.372 10:17:21 -- paths/export.sh@5 -- $ export PATH 00:00:33.372 10:17:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.372 10:17:21 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:33.372 10:17:21 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:33.372 10:17:21 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721031441.XXXXXX 00:00:33.372 10:17:21 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721031441.2AIo2E 00:00:33.372 10:17:21 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:33.372 10:17:21 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:33.372 10:17:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:33.372 10:17:21 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:33.372 10:17:21 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:33.372 10:17:21 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:33.372 10:17:21 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:33.372 10:17:21 -- common/autotest_common.sh@10 -- $ set +x 00:00:33.372 10:17:21 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:33.372 10:17:21 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:33.372 10:17:21 -- pm/common@17 -- $ local monitor 00:00:33.372 10:17:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.372 10:17:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.372 10:17:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.372 10:17:21 -- pm/common@21 -- $ date +%s 00:00:33.372 10:17:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.372 10:17:21 -- pm/common@21 -- $ date +%s 00:00:33.372 10:17:21 -- pm/common@25 -- $ sleep 1 00:00:33.372 10:17:21 -- pm/common@21 -- $ date +%s 00:00:33.372 10:17:21 -- pm/common@21 -- $ date +%s 00:00:33.372 10:17:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721031441 00:00:33.372 10:17:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721031441 00:00:33.372 10:17:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721031441 00:00:33.372 10:17:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721031441 00:00:33.372 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721031441_collect-vmstat.pm.log 00:00:33.372 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721031441_collect-cpu-load.pm.log 00:00:33.372 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721031441_collect-cpu-temp.pm.log 00:00:33.372 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721031441_collect-bmc-pm.bmc.pm.log 00:00:34.309 10:17:22 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:34.309 10:17:22 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:34.309 10:17:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:34.309 10:17:22 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:34.309 10:17:22 -- spdk/autobuild.sh@16 -- $ date -u 00:00:34.309 Mon Jul 15 08:17:22 AM UTC 2024 00:00:34.309 10:17:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:34.309 v24.09-pre-202-g719d03c6a 00:00:34.309 10:17:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:34.309 10:17:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:34.309 10:17:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:34.309 10:17:22 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:34.309 10:17:22 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:34.309 10:17:22 -- common/autotest_common.sh@10 -- $ set +x 00:00:34.309 ************************************ 00:00:34.309 START TEST ubsan 00:00:34.309 ************************************ 00:00:34.309 10:17:22 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:34.309 using ubsan 00:00:34.309 00:00:34.309 real 0m0.000s 00:00:34.309 user 0m0.000s 00:00:34.309 sys 0m0.000s 00:00:34.309 10:17:22 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:34.309 10:17:22 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:34.309 ************************************ 00:00:34.309 END TEST ubsan 00:00:34.309 ************************************ 00:00:34.567 10:17:22 -- common/autotest_common.sh@1142 -- $ return 0 00:00:34.567 10:17:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:34.567 10:17:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:34.567 10:17:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:34.567 10:17:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:34.567 10:17:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:34.567 10:17:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:34.567 10:17:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:34.567 10:17:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:34.567 10:17:22 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:34.567 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:34.567 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:34.836 Using 'verbs' RDMA provider 00:00:45.370 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:00:55.340 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:00:55.340 Creating mk/config.mk...done. 00:00:55.340 Creating mk/cc.flags.mk...done. 00:00:55.340 Type 'make' to build. 00:00:55.340 10:17:43 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:00:55.340 10:17:43 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:55.340 10:17:43 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:55.340 10:17:43 -- common/autotest_common.sh@10 -- $ set +x 00:00:55.340 ************************************ 00:00:55.340 START TEST make 00:00:55.340 ************************************ 00:00:55.340 10:17:43 make -- common/autotest_common.sh@1123 -- $ make -j48 00:00:55.340 make[1]: Nothing to be done for 'all'. 
00:00:57.252 The Meson build system 00:00:57.252 Version: 1.3.1 00:00:57.252 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:00:57.252 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:00:57.252 Build type: native build 00:00:57.252 Project name: libvfio-user 00:00:57.252 Project version: 0.0.1 00:00:57.252 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:57.252 C linker for the host machine: cc ld.bfd 2.39-16 00:00:57.252 Host machine cpu family: x86_64 00:00:57.252 Host machine cpu: x86_64 00:00:57.252 Run-time dependency threads found: YES 00:00:57.252 Library dl found: YES 00:00:57.252 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:57.252 Run-time dependency json-c found: YES 0.17 00:00:57.252 Run-time dependency cmocka found: YES 1.1.7 00:00:57.252 Program pytest-3 found: NO 00:00:57.252 Program flake8 found: NO 00:00:57.252 Program misspell-fixer found: NO 00:00:57.252 Program restructuredtext-lint found: NO 00:00:57.252 Program valgrind found: YES (/usr/bin/valgrind) 00:00:57.252 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:57.252 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:57.252 Compiler for C supports arguments -Wwrite-strings: YES 00:00:57.252 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:00:57.252 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:00:57.252 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:00:57.252 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:00:57.252 Build targets in project: 8 00:00:57.252 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:00:57.252 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:00:57.252 00:00:57.252 libvfio-user 0.0.1 00:00:57.252 00:00:57.252 User defined options 00:00:57.252 buildtype : debug 00:00:57.252 default_library: shared 00:00:57.252 libdir : /usr/local/lib 00:00:57.252 00:00:57.252 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:00:57.825 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:00:57.826 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:00:57.826 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:00:57.826 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:00:57.826 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:00:57.826 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:00:57.826 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:00:57.826 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:00:57.826 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:00:57.826 [9/37] Compiling C object samples/null.p/null.c.o 00:00:57.826 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:00:57.826 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:00:57.826 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:00:57.826 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:00:58.089 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:00:58.089 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:00:58.089 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:00:58.089 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:00:58.089 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:00:58.089 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:00:58.089 [20/37] Compiling C object samples/server.p/server.c.o 00:00:58.089 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:00:58.089 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:00:58.089 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:00:58.089 [24/37] Compiling C object samples/client.p/client.c.o 00:00:58.089 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:00:58.089 [26/37] Linking target samples/client 00:00:58.089 [27/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:00:58.089 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:00:58.089 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:00:58.349 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:00:58.349 [31/37] Linking target test/unit_tests 00:00:58.349 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:00:58.349 [33/37] Linking target samples/lspci 00:00:58.349 [34/37] Linking target samples/shadow_ioeventfd_server 00:00:58.349 [35/37] Linking target samples/server 00:00:58.349 [36/37] Linking target samples/gpio-pci-idio-16 00:00:58.349 [37/37] Linking target samples/null 00:00:58.349 INFO: autodetecting backend as ninja 00:00:58.349 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:00:58.612 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:00:59.186 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:00:59.186 ninja: no work to do. 00:01:04.451 The Meson build system 00:01:04.451 Version: 1.3.1 00:01:04.451 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:04.451 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:04.451 Build type: native build 00:01:04.451 Program cat found: YES (/usr/bin/cat) 00:01:04.451 Project name: DPDK 00:01:04.451 Project version: 24.03.0 00:01:04.451 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:04.451 C linker for the host machine: cc ld.bfd 2.39-16 00:01:04.451 Host machine cpu family: x86_64 00:01:04.451 Host machine cpu: x86_64 00:01:04.451 Message: ## Building in Developer Mode ## 00:01:04.451 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:04.451 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:04.451 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:04.451 Program python3 found: YES (/usr/bin/python3) 00:01:04.451 Program cat found: YES (/usr/bin/cat) 00:01:04.451 Compiler for C supports arguments -march=native: YES 00:01:04.451 Checking for size of "void *" : 8 00:01:04.451 Checking for size of "void *" : 8 (cached) 00:01:04.451 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:04.451 Library m found: YES 00:01:04.451 Library numa found: YES 00:01:04.451 Has header "numaif.h" : YES 00:01:04.451 Library fdt found: NO 00:01:04.451 Library execinfo found: NO 00:01:04.451 Has header "execinfo.h" : YES 00:01:04.451 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:04.451 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:04.451 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:04.451 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:04.451 Run-time dependency openssl found: YES 3.0.9 00:01:04.451 Run-time dependency libpcap found: YES 1.10.4 00:01:04.451 Has header "pcap.h" with dependency libpcap: YES 00:01:04.451 Compiler for C supports arguments -Wcast-qual: YES 00:01:04.451 Compiler for C supports arguments -Wdeprecated: YES 00:01:04.451 Compiler for C supports arguments -Wformat: YES 00:01:04.451 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:04.451 Compiler for C supports arguments -Wformat-security: NO 00:01:04.451 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:04.451 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:04.451 Compiler for C supports arguments -Wnested-externs: YES 00:01:04.451 Compiler for C supports arguments -Wold-style-definition: YES 00:01:04.451 Compiler for C supports arguments -Wpointer-arith: YES 00:01:04.451 Compiler for C supports arguments -Wsign-compare: YES 00:01:04.451 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:04.451 Compiler for C supports arguments -Wundef: YES 00:01:04.451 Compiler for C supports arguments -Wwrite-strings: YES 00:01:04.451 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:04.451 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:04.451 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:04.451 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:04.451 Program objdump found: YES (/usr/bin/objdump) 00:01:04.451 Compiler for C supports arguments -mavx512f: YES 00:01:04.451 Checking if "AVX512 checking" compiles: YES 00:01:04.451 Fetching value of define "__SSE4_2__" : 1 00:01:04.451 Fetching value of define "__AES__" : 1 00:01:04.451 Fetching value of define "__AVX__" : 1 00:01:04.451 Fetching value of define "__AVX2__" : (undefined) 00:01:04.451 Fetching value of define "__AVX512BW__" : (undefined) 00:01:04.451 Fetching value of define "__AVX512CD__" : (undefined) 00:01:04.451 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:04.451 Fetching value of define "__AVX512F__" : (undefined) 00:01:04.451 Fetching value of define "__AVX512VL__" : (undefined) 00:01:04.451 Fetching value of define "__PCLMUL__" : 1 00:01:04.451 Fetching value of define "__RDRND__" : 1 00:01:04.451 Fetching value of define "__RDSEED__" : (undefined) 00:01:04.451 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:04.451 Fetching value of define "__znver1__" : (undefined) 00:01:04.451 Fetching value of define "__znver2__" : (undefined) 00:01:04.451 Fetching value of define "__znver3__" : (undefined) 00:01:04.451 Fetching value of define "__znver4__" : (undefined) 00:01:04.451 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:04.451 Message: lib/log: Defining dependency "log" 00:01:04.451 Message: lib/kvargs: Defining dependency "kvargs" 00:01:04.451 Message: lib/telemetry: Defining dependency "telemetry" 00:01:04.451 Checking for function "getentropy" : NO 00:01:04.451 Message: lib/eal: Defining dependency "eal" 00:01:04.451 Message: lib/ring: Defining dependency "ring" 00:01:04.451 Message: lib/rcu: Defining dependency "rcu" 00:01:04.451 Message: lib/mempool: Defining dependency "mempool" 00:01:04.451 Message: lib/mbuf: Defining dependency "mbuf" 00:01:04.451 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:04.451 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:04.451 Compiler for C supports arguments -mpclmul: YES 00:01:04.451 Compiler for C supports arguments -maes: YES 00:01:04.451 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:04.451 Compiler for C supports arguments -mavx512bw: YES 00:01:04.451 Compiler for C supports arguments -mavx512dq: YES 00:01:04.451 Compiler for C supports arguments -mavx512vl: YES 00:01:04.451 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:04.451 Compiler for C supports arguments -mavx2: YES 00:01:04.451 Compiler for C supports arguments -mavx: YES 00:01:04.451 Message: lib/net: Defining dependency "net" 00:01:04.451 Message: lib/meter: Defining dependency "meter" 00:01:04.451 Message: lib/ethdev: Defining dependency "ethdev" 00:01:04.451 Message: lib/pci: Defining dependency "pci" 00:01:04.451 Message: lib/cmdline: Defining dependency "cmdline" 00:01:04.451 Message: lib/hash: Defining dependency "hash" 00:01:04.451 Message: lib/timer: Defining dependency "timer" 00:01:04.451 Message: lib/compressdev: Defining dependency "compressdev" 00:01:04.451 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:04.451 Message: lib/dmadev: Defining dependency "dmadev" 00:01:04.451 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:04.451 Message: lib/power: Defining dependency "power" 00:01:04.451 Message: lib/reorder: Defining dependency "reorder" 00:01:04.451 
Message: lib/security: Defining dependency "security" 00:01:04.451 Has header "linux/userfaultfd.h" : YES 00:01:04.451 Has header "linux/vduse.h" : YES 00:01:04.451 Message: lib/vhost: Defining dependency "vhost" 00:01:04.451 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:04.451 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:04.451 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:04.451 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:04.451 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:04.451 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:04.451 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:04.452 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:04.452 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:04.452 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:04.452 Program doxygen found: YES (/usr/bin/doxygen) 00:01:04.452 Configuring doxy-api-html.conf using configuration 00:01:04.452 Configuring doxy-api-man.conf using configuration 00:01:04.452 Program mandb found: YES (/usr/bin/mandb) 00:01:04.452 Program sphinx-build found: NO 00:01:04.452 Configuring rte_build_config.h using configuration 00:01:04.452 Message: 00:01:04.452 ================= 00:01:04.452 Applications Enabled 00:01:04.452 ================= 00:01:04.452 00:01:04.452 apps: 00:01:04.452 00:01:04.452 00:01:04.452 Message: 00:01:04.452 ================= 00:01:04.452 Libraries Enabled 00:01:04.452 ================= 00:01:04.452 00:01:04.452 libs: 00:01:04.452 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:04.452 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:04.452 cryptodev, dmadev, power, reorder, security, vhost, 00:01:04.452 00:01:04.452 Message: 00:01:04.452 =============== 00:01:04.452 Drivers Enabled 00:01:04.452 =============== 00:01:04.452 00:01:04.452 common: 00:01:04.452 00:01:04.452 bus: 00:01:04.452 pci, vdev, 00:01:04.452 mempool: 00:01:04.452 ring, 00:01:04.452 dma: 00:01:04.452 00:01:04.452 net: 00:01:04.452 00:01:04.452 crypto: 00:01:04.452 00:01:04.452 compress: 00:01:04.452 00:01:04.452 vdpa: 00:01:04.452 00:01:04.452 00:01:04.452 Message: 00:01:04.452 ================= 00:01:04.452 Content Skipped 00:01:04.452 ================= 00:01:04.452 00:01:04.452 apps: 00:01:04.452 dumpcap: explicitly disabled via build config 00:01:04.452 graph: explicitly disabled via build config 00:01:04.452 pdump: explicitly disabled via build config 00:01:04.452 proc-info: explicitly disabled via build config 00:01:04.452 test-acl: explicitly disabled via build config 00:01:04.452 test-bbdev: explicitly disabled via build config 00:01:04.452 test-cmdline: explicitly disabled via build config 00:01:04.452 test-compress-perf: explicitly disabled via build config 00:01:04.452 test-crypto-perf: explicitly disabled via build config 00:01:04.452 test-dma-perf: explicitly disabled via build config 00:01:04.452 test-eventdev: explicitly disabled via build config 00:01:04.452 test-fib: explicitly disabled via build config 00:01:04.452 test-flow-perf: explicitly disabled via build config 00:01:04.452 test-gpudev: explicitly disabled via build config 00:01:04.452 test-mldev: explicitly disabled via build config 00:01:04.452 test-pipeline: explicitly disabled via build config 00:01:04.452 test-pmd: explicitly disabled via build config 
00:01:04.452 test-regex: explicitly disabled via build config 00:01:04.452 test-sad: explicitly disabled via build config 00:01:04.452 test-security-perf: explicitly disabled via build config 00:01:04.452 00:01:04.452 libs: 00:01:04.452 argparse: explicitly disabled via build config 00:01:04.452 metrics: explicitly disabled via build config 00:01:04.452 acl: explicitly disabled via build config 00:01:04.452 bbdev: explicitly disabled via build config 00:01:04.452 bitratestats: explicitly disabled via build config 00:01:04.452 bpf: explicitly disabled via build config 00:01:04.452 cfgfile: explicitly disabled via build config 00:01:04.452 distributor: explicitly disabled via build config 00:01:04.452 efd: explicitly disabled via build config 00:01:04.452 eventdev: explicitly disabled via build config 00:01:04.452 dispatcher: explicitly disabled via build config 00:01:04.452 gpudev: explicitly disabled via build config 00:01:04.452 gro: explicitly disabled via build config 00:01:04.452 gso: explicitly disabled via build config 00:01:04.452 ip_frag: explicitly disabled via build config 00:01:04.452 jobstats: explicitly disabled via build config 00:01:04.452 latencystats: explicitly disabled via build config 00:01:04.452 lpm: explicitly disabled via build config 00:01:04.452 member: explicitly disabled via build config 00:01:04.452 pcapng: explicitly disabled via build config 00:01:04.452 rawdev: explicitly disabled via build config 00:01:04.452 regexdev: explicitly disabled via build config 00:01:04.452 mldev: explicitly disabled via build config 00:01:04.452 rib: explicitly disabled via build config 00:01:04.452 sched: explicitly disabled via build config 00:01:04.452 stack: explicitly disabled via build config 00:01:04.452 ipsec: explicitly disabled via build config 00:01:04.452 pdcp: explicitly disabled via build config 00:01:04.452 fib: explicitly disabled via build config 00:01:04.452 port: explicitly disabled via build config 00:01:04.452 pdump: explicitly disabled via build config 00:01:04.452 table: explicitly disabled via build config 00:01:04.452 pipeline: explicitly disabled via build config 00:01:04.452 graph: explicitly disabled via build config 00:01:04.452 node: explicitly disabled via build config 00:01:04.452 00:01:04.452 drivers: 00:01:04.452 common/cpt: not in enabled drivers build config 00:01:04.452 common/dpaax: not in enabled drivers build config 00:01:04.452 common/iavf: not in enabled drivers build config 00:01:04.452 common/idpf: not in enabled drivers build config 00:01:04.452 common/ionic: not in enabled drivers build config 00:01:04.452 common/mvep: not in enabled drivers build config 00:01:04.452 common/octeontx: not in enabled drivers build config 00:01:04.452 bus/auxiliary: not in enabled drivers build config 00:01:04.452 bus/cdx: not in enabled drivers build config 00:01:04.452 bus/dpaa: not in enabled drivers build config 00:01:04.452 bus/fslmc: not in enabled drivers build config 00:01:04.452 bus/ifpga: not in enabled drivers build config 00:01:04.452 bus/platform: not in enabled drivers build config 00:01:04.452 bus/uacce: not in enabled drivers build config 00:01:04.452 bus/vmbus: not in enabled drivers build config 00:01:04.452 common/cnxk: not in enabled drivers build config 00:01:04.452 common/mlx5: not in enabled drivers build config 00:01:04.452 common/nfp: not in enabled drivers build config 00:01:04.452 common/nitrox: not in enabled drivers build config 00:01:04.452 common/qat: not in enabled drivers build config 00:01:04.452 common/sfc_efx: not in 
enabled drivers build config 00:01:04.452 mempool/bucket: not in enabled drivers build config 00:01:04.452 mempool/cnxk: not in enabled drivers build config 00:01:04.452 mempool/dpaa: not in enabled drivers build config 00:01:04.452 mempool/dpaa2: not in enabled drivers build config 00:01:04.452 mempool/octeontx: not in enabled drivers build config 00:01:04.452 mempool/stack: not in enabled drivers build config 00:01:04.452 dma/cnxk: not in enabled drivers build config 00:01:04.452 dma/dpaa: not in enabled drivers build config 00:01:04.452 dma/dpaa2: not in enabled drivers build config 00:01:04.452 dma/hisilicon: not in enabled drivers build config 00:01:04.452 dma/idxd: not in enabled drivers build config 00:01:04.452 dma/ioat: not in enabled drivers build config 00:01:04.452 dma/skeleton: not in enabled drivers build config 00:01:04.452 net/af_packet: not in enabled drivers build config 00:01:04.452 net/af_xdp: not in enabled drivers build config 00:01:04.452 net/ark: not in enabled drivers build config 00:01:04.452 net/atlantic: not in enabled drivers build config 00:01:04.452 net/avp: not in enabled drivers build config 00:01:04.452 net/axgbe: not in enabled drivers build config 00:01:04.452 net/bnx2x: not in enabled drivers build config 00:01:04.452 net/bnxt: not in enabled drivers build config 00:01:04.452 net/bonding: not in enabled drivers build config 00:01:04.452 net/cnxk: not in enabled drivers build config 00:01:04.452 net/cpfl: not in enabled drivers build config 00:01:04.452 net/cxgbe: not in enabled drivers build config 00:01:04.452 net/dpaa: not in enabled drivers build config 00:01:04.452 net/dpaa2: not in enabled drivers build config 00:01:04.452 net/e1000: not in enabled drivers build config 00:01:04.452 net/ena: not in enabled drivers build config 00:01:04.452 net/enetc: not in enabled drivers build config 00:01:04.452 net/enetfec: not in enabled drivers build config 00:01:04.452 net/enic: not in enabled drivers build config 00:01:04.452 net/failsafe: not in enabled drivers build config 00:01:04.452 net/fm10k: not in enabled drivers build config 00:01:04.452 net/gve: not in enabled drivers build config 00:01:04.452 net/hinic: not in enabled drivers build config 00:01:04.452 net/hns3: not in enabled drivers build config 00:01:04.452 net/i40e: not in enabled drivers build config 00:01:04.452 net/iavf: not in enabled drivers build config 00:01:04.452 net/ice: not in enabled drivers build config 00:01:04.452 net/idpf: not in enabled drivers build config 00:01:04.452 net/igc: not in enabled drivers build config 00:01:04.452 net/ionic: not in enabled drivers build config 00:01:04.452 net/ipn3ke: not in enabled drivers build config 00:01:04.452 net/ixgbe: not in enabled drivers build config 00:01:04.452 net/mana: not in enabled drivers build config 00:01:04.452 net/memif: not in enabled drivers build config 00:01:04.452 net/mlx4: not in enabled drivers build config 00:01:04.452 net/mlx5: not in enabled drivers build config 00:01:04.452 net/mvneta: not in enabled drivers build config 00:01:04.452 net/mvpp2: not in enabled drivers build config 00:01:04.452 net/netvsc: not in enabled drivers build config 00:01:04.452 net/nfb: not in enabled drivers build config 00:01:04.452 net/nfp: not in enabled drivers build config 00:01:04.452 net/ngbe: not in enabled drivers build config 00:01:04.452 net/null: not in enabled drivers build config 00:01:04.452 net/octeontx: not in enabled drivers build config 00:01:04.452 net/octeon_ep: not in enabled drivers build config 00:01:04.452 
net/pcap: not in enabled drivers build config 00:01:04.452 net/pfe: not in enabled drivers build config 00:01:04.452 net/qede: not in enabled drivers build config 00:01:04.452 net/ring: not in enabled drivers build config 00:01:04.452 net/sfc: not in enabled drivers build config 00:01:04.452 net/softnic: not in enabled drivers build config 00:01:04.452 net/tap: not in enabled drivers build config 00:01:04.452 net/thunderx: not in enabled drivers build config 00:01:04.452 net/txgbe: not in enabled drivers build config 00:01:04.452 net/vdev_netvsc: not in enabled drivers build config 00:01:04.452 net/vhost: not in enabled drivers build config 00:01:04.452 net/virtio: not in enabled drivers build config 00:01:04.452 net/vmxnet3: not in enabled drivers build config 00:01:04.452 raw/*: missing internal dependency, "rawdev" 00:01:04.452 crypto/armv8: not in enabled drivers build config 00:01:04.452 crypto/bcmfs: not in enabled drivers build config 00:01:04.452 crypto/caam_jr: not in enabled drivers build config 00:01:04.452 crypto/ccp: not in enabled drivers build config 00:01:04.452 crypto/cnxk: not in enabled drivers build config 00:01:04.452 crypto/dpaa_sec: not in enabled drivers build config 00:01:04.453 crypto/dpaa2_sec: not in enabled drivers build config 00:01:04.453 crypto/ipsec_mb: not in enabled drivers build config 00:01:04.453 crypto/mlx5: not in enabled drivers build config 00:01:04.453 crypto/mvsam: not in enabled drivers build config 00:01:04.453 crypto/nitrox: not in enabled drivers build config 00:01:04.453 crypto/null: not in enabled drivers build config 00:01:04.453 crypto/octeontx: not in enabled drivers build config 00:01:04.453 crypto/openssl: not in enabled drivers build config 00:01:04.453 crypto/scheduler: not in enabled drivers build config 00:01:04.453 crypto/uadk: not in enabled drivers build config 00:01:04.453 crypto/virtio: not in enabled drivers build config 00:01:04.453 compress/isal: not in enabled drivers build config 00:01:04.453 compress/mlx5: not in enabled drivers build config 00:01:04.453 compress/nitrox: not in enabled drivers build config 00:01:04.453 compress/octeontx: not in enabled drivers build config 00:01:04.453 compress/zlib: not in enabled drivers build config 00:01:04.453 regex/*: missing internal dependency, "regexdev" 00:01:04.453 ml/*: missing internal dependency, "mldev" 00:01:04.453 vdpa/ifc: not in enabled drivers build config 00:01:04.453 vdpa/mlx5: not in enabled drivers build config 00:01:04.453 vdpa/nfp: not in enabled drivers build config 00:01:04.453 vdpa/sfc: not in enabled drivers build config 00:01:04.453 event/*: missing internal dependency, "eventdev" 00:01:04.453 baseband/*: missing internal dependency, "bbdev" 00:01:04.453 gpu/*: missing internal dependency, "gpudev" 00:01:04.453 00:01:04.453 00:01:04.453 Build targets in project: 85 00:01:04.453 00:01:04.453 DPDK 24.03.0 00:01:04.453 00:01:04.453 User defined options 00:01:04.453 buildtype : debug 00:01:04.453 default_library : shared 00:01:04.453 libdir : lib 00:01:04.453 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:04.453 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:04.453 c_link_args : 00:01:04.453 cpu_instruction_set: native 00:01:04.453 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:04.453 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:04.453 enable_docs : false 00:01:04.453 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:04.453 enable_kmods : false 00:01:04.453 max_lcores : 128 00:01:04.453 tests : false 00:01:04.453 00:01:04.453 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:04.719 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:04.719 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:04.719 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:04.719 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:04.719 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:04.719 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:04.980 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:04.980 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:04.980 [8/268] Linking static target lib/librte_kvargs.a 00:01:04.980 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:04.980 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:04.980 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:04.980 [12/268] Linking static target lib/librte_log.a 00:01:04.980 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:04.980 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:04.980 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:04.980 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:05.551 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.812 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:05.812 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:05.812 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:05.812 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:05.812 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:05.812 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:05.812 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:05.813 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:05.813 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:05.813 [27/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:05.813 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:05.813 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:05.813 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:05.813 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:05.813 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:05.813 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:05.813 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:05.813 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:05.813 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:05.813 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:05.813 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:05.813 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:05.813 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:05.813 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:05.813 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:05.813 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:05.813 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:05.813 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:05.813 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:05.813 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:05.813 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:05.813 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:05.813 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:05.813 [51/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:05.813 [52/268] Linking static target lib/librte_telemetry.a 00:01:05.813 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:05.813 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:05.813 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:05.813 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:05.813 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:05.813 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:06.077 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:06.077 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:06.077 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:06.077 [62/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.077 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:06.077 [64/268] Linking target lib/librte_log.so.24.1 00:01:06.077 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:06.077 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:06.347 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:06.347 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:06.347 [69/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:06.347 [70/268] Compiling C object 
lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:06.347 [71/268] Linking static target lib/librte_pci.a 00:01:06.347 [72/268] Linking target lib/librte_kvargs.so.24.1 00:01:06.347 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:06.611 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:06.611 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:06.611 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:06.611 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:06.611 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:06.611 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:06.611 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:06.611 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:06.611 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:06.611 [83/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:06.611 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:06.611 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:06.611 [86/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:06.611 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:06.611 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:06.611 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:06.871 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:06.871 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:06.871 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:06.871 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:06.871 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:06.871 [95/268] Linking static target lib/librte_ring.a 00:01:06.871 [96/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:06.871 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:06.871 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:06.871 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:06.871 [100/268] Linking static target lib/librte_meter.a 00:01:06.871 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:06.871 [102/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:06.871 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:06.871 [104/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.871 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:06.871 [106/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.871 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:06.871 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:06.871 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:06.871 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:06.871 [111/268] Linking target 
lib/librte_telemetry.so.24.1 00:01:06.872 [112/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:06.872 [113/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:06.872 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:06.872 [115/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:07.132 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:07.132 [117/268] Linking static target lib/librte_mempool.a 00:01:07.132 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:07.133 [119/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:07.133 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:07.133 [121/268] Linking static target lib/librte_rcu.a 00:01:07.133 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:07.133 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:07.133 [124/268] Linking static target lib/librte_eal.a 00:01:07.133 [125/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:07.133 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:07.133 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:07.133 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:07.395 [129/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:07.395 [130/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:07.395 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:07.395 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:07.395 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:07.395 [134/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.395 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:07.395 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:07.395 [137/268] Linking static target lib/librte_net.a 00:01:07.395 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:07.395 [139/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.395 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:07.395 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:07.656 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:07.656 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:07.656 [144/268] Linking static target lib/librte_cmdline.a 00:01:07.656 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:07.656 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:07.656 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:07.656 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.656 [149/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:07.656 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:07.914 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:07.914 
[152/268] Linking static target lib/librte_timer.a 00:01:07.914 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:07.914 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:07.914 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:07.914 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:07.914 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:07.914 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:07.914 [159/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.914 [160/268] Linking static target lib/librte_dmadev.a 00:01:07.914 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:07.914 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:08.171 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:08.171 [164/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:08.171 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:08.171 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:08.171 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:08.171 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.171 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:08.171 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:08.171 [171/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.171 [172/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:08.171 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:08.171 [174/268] Linking static target lib/librte_compressdev.a 00:01:08.171 [175/268] Linking static target lib/librte_power.a 00:01:08.171 [176/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:08.171 [177/268] Linking static target lib/librte_hash.a 00:01:08.431 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:08.431 [179/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:08.431 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:08.431 [181/268] Linking static target lib/librte_reorder.a 00:01:08.431 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:08.431 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:08.431 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:08.431 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:08.431 [186/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:08.431 [187/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:08.431 [188/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.431 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:08.431 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:08.431 [191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:08.431 [192/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:08.431 [193/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:08.431 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:08.431 [195/268] Linking static target lib/librte_mbuf.a 00:01:08.689 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:08.689 [197/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.689 [198/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:08.689 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:08.689 [200/268] Linking static target lib/librte_security.a 00:01:08.689 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.689 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.689 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:08.689 [204/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.689 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:08.689 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:08.689 [207/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.689 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.689 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.689 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:08.689 [211/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:08.689 [212/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:08.946 [213/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.946 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.946 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:08.946 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.946 [217/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:08.947 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.947 [219/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:08.947 [220/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:08.947 [221/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:08.947 [222/268] Linking static target drivers/librte_mempool_ring.a 00:01:08.947 [223/268] Linking static target lib/librte_cryptodev.a 00:01:08.947 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:09.204 [225/268] Linking static target lib/librte_ethdev.a 00:01:09.204 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.137 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.508 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:13.464 [229/268] Generating 
lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.464 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.464 [231/268] Linking target lib/librte_eal.so.24.1 00:01:13.464 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:13.464 [233/268] Linking target lib/librte_ring.so.24.1 00:01:13.464 [234/268] Linking target lib/librte_pci.so.24.1 00:01:13.464 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:13.464 [236/268] Linking target lib/librte_meter.so.24.1 00:01:13.464 [237/268] Linking target lib/librte_timer.so.24.1 00:01:13.464 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:13.464 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:13.464 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:13.464 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:13.464 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:13.464 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:13.724 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:13.724 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:13.724 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:13.724 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:13.724 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:13.724 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:13.724 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:13.981 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:13.981 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:13.981 [253/268] Linking target lib/librte_net.so.24.1 00:01:13.981 [254/268] Linking target lib/librte_compressdev.so.24.1 00:01:13.981 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:13.981 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:13.981 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:13.981 [258/268] Linking target lib/librte_hash.so.24.1 00:01:13.981 [259/268] Linking target lib/librte_cmdline.so.24.1 00:01:13.981 [260/268] Linking target lib/librte_security.so.24.1 00:01:14.238 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:14.238 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:14.238 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:14.238 [264/268] Linking target lib/librte_power.so.24.1 00:01:16.760 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:16.760 [266/268] Linking static target lib/librte_vhost.a 00:01:18.133 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.133 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:18.133 INFO: autodetecting backend as ninja 00:01:18.133 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:19.064 CC lib/log/log.o 00:01:19.064 CC lib/log/log_flags.o 00:01:19.064 CC lib/log/log_deprecated.o 00:01:19.064 CC lib/ut/ut.o 00:01:19.064 CC 
lib/ut_mock/mock.o 00:01:19.064 LIB libspdk_log.a 00:01:19.064 LIB libspdk_ut.a 00:01:19.064 LIB libspdk_ut_mock.a 00:01:19.064 SO libspdk_ut.so.2.0 00:01:19.064 SO libspdk_ut_mock.so.6.0 00:01:19.064 SO libspdk_log.so.7.0 00:01:19.064 SYMLINK libspdk_ut.so 00:01:19.064 SYMLINK libspdk_ut_mock.so 00:01:19.064 SYMLINK libspdk_log.so 00:01:19.322 CXX lib/trace_parser/trace.o 00:01:19.322 CC lib/ioat/ioat.o 00:01:19.322 CC lib/dma/dma.o 00:01:19.322 CC lib/util/base64.o 00:01:19.322 CC lib/util/bit_array.o 00:01:19.322 CC lib/util/cpuset.o 00:01:19.322 CC lib/util/crc16.o 00:01:19.322 CC lib/util/crc32.o 00:01:19.322 CC lib/util/crc32c.o 00:01:19.322 CC lib/util/crc32_ieee.o 00:01:19.322 CC lib/util/crc64.o 00:01:19.322 CC lib/util/dif.o 00:01:19.322 CC lib/util/fd.o 00:01:19.322 CC lib/util/file.o 00:01:19.322 CC lib/util/hexlify.o 00:01:19.322 CC lib/util/iov.o 00:01:19.322 CC lib/util/math.o 00:01:19.322 CC lib/util/pipe.o 00:01:19.322 CC lib/util/strerror_tls.o 00:01:19.322 CC lib/util/string.o 00:01:19.322 CC lib/util/uuid.o 00:01:19.322 CC lib/util/fd_group.o 00:01:19.322 CC lib/util/xor.o 00:01:19.322 CC lib/util/zipf.o 00:01:19.322 CC lib/vfio_user/host/vfio_user_pci.o 00:01:19.322 CC lib/vfio_user/host/vfio_user.o 00:01:19.579 LIB libspdk_dma.a 00:01:19.579 SO libspdk_dma.so.4.0 00:01:19.579 SYMLINK libspdk_dma.so 00:01:19.579 LIB libspdk_ioat.a 00:01:19.579 SO libspdk_ioat.so.7.0 00:01:19.579 LIB libspdk_vfio_user.a 00:01:19.579 SYMLINK libspdk_ioat.so 00:01:19.836 SO libspdk_vfio_user.so.5.0 00:01:19.836 SYMLINK libspdk_vfio_user.so 00:01:19.836 LIB libspdk_util.a 00:01:19.836 SO libspdk_util.so.9.1 00:01:20.093 SYMLINK libspdk_util.so 00:01:20.350 CC lib/json/json_parse.o 00:01:20.350 CC lib/conf/conf.o 00:01:20.350 CC lib/idxd/idxd.o 00:01:20.350 CC lib/vmd/vmd.o 00:01:20.350 CC lib/json/json_util.o 00:01:20.350 CC lib/rdma_utils/rdma_utils.o 00:01:20.350 CC lib/idxd/idxd_user.o 00:01:20.350 CC lib/vmd/led.o 00:01:20.350 CC lib/env_dpdk/env.o 00:01:20.350 CC lib/rdma_provider/common.o 00:01:20.350 CC lib/idxd/idxd_kernel.o 00:01:20.350 CC lib/json/json_write.o 00:01:20.350 CC lib/env_dpdk/memory.o 00:01:20.350 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:20.350 CC lib/env_dpdk/pci.o 00:01:20.350 CC lib/env_dpdk/init.o 00:01:20.350 CC lib/env_dpdk/threads.o 00:01:20.350 CC lib/env_dpdk/pci_ioat.o 00:01:20.350 CC lib/env_dpdk/pci_virtio.o 00:01:20.350 CC lib/env_dpdk/pci_vmd.o 00:01:20.350 CC lib/env_dpdk/pci_idxd.o 00:01:20.350 CC lib/env_dpdk/pci_event.o 00:01:20.350 CC lib/env_dpdk/sigbus_handler.o 00:01:20.350 CC lib/env_dpdk/pci_dpdk.o 00:01:20.350 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:20.350 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:20.350 LIB libspdk_trace_parser.a 00:01:20.350 SO libspdk_trace_parser.so.5.0 00:01:20.350 SYMLINK libspdk_trace_parser.so 00:01:20.350 LIB libspdk_rdma_provider.a 00:01:20.606 SO libspdk_rdma_provider.so.6.0 00:01:20.606 LIB libspdk_conf.a 00:01:20.606 SO libspdk_conf.so.6.0 00:01:20.606 SYMLINK libspdk_rdma_provider.so 00:01:20.606 LIB libspdk_rdma_utils.a 00:01:20.606 LIB libspdk_json.a 00:01:20.606 SYMLINK libspdk_conf.so 00:01:20.606 SO libspdk_rdma_utils.so.1.0 00:01:20.606 SO libspdk_json.so.6.0 00:01:20.606 SYMLINK libspdk_rdma_utils.so 00:01:20.606 SYMLINK libspdk_json.so 00:01:20.864 CC lib/jsonrpc/jsonrpc_server.o 00:01:20.864 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:20.864 CC lib/jsonrpc/jsonrpc_client.o 00:01:20.864 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:20.864 LIB libspdk_idxd.a 00:01:20.864 SO libspdk_idxd.so.12.0 00:01:20.864 
LIB libspdk_vmd.a 00:01:20.864 SYMLINK libspdk_idxd.so 00:01:20.864 SO libspdk_vmd.so.6.0 00:01:21.122 SYMLINK libspdk_vmd.so 00:01:21.122 LIB libspdk_jsonrpc.a 00:01:21.122 SO libspdk_jsonrpc.so.6.0 00:01:21.122 SYMLINK libspdk_jsonrpc.so 00:01:21.378 CC lib/rpc/rpc.o 00:01:21.635 LIB libspdk_rpc.a 00:01:21.635 SO libspdk_rpc.so.6.0 00:01:21.635 SYMLINK libspdk_rpc.so 00:01:21.890 CC lib/keyring/keyring.o 00:01:21.890 CC lib/trace/trace.o 00:01:21.890 CC lib/notify/notify.o 00:01:21.890 CC lib/notify/notify_rpc.o 00:01:21.890 CC lib/keyring/keyring_rpc.o 00:01:21.890 CC lib/trace/trace_flags.o 00:01:21.890 CC lib/trace/trace_rpc.o 00:01:21.890 LIB libspdk_notify.a 00:01:21.890 SO libspdk_notify.so.6.0 00:01:22.147 LIB libspdk_keyring.a 00:01:22.147 SYMLINK libspdk_notify.so 00:01:22.147 LIB libspdk_trace.a 00:01:22.147 SO libspdk_keyring.so.1.0 00:01:22.147 SO libspdk_trace.so.10.0 00:01:22.147 SYMLINK libspdk_keyring.so 00:01:22.147 SYMLINK libspdk_trace.so 00:01:22.404 LIB libspdk_env_dpdk.a 00:01:22.404 CC lib/sock/sock.o 00:01:22.404 CC lib/sock/sock_rpc.o 00:01:22.404 CC lib/thread/thread.o 00:01:22.404 CC lib/thread/iobuf.o 00:01:22.404 SO libspdk_env_dpdk.so.14.1 00:01:22.404 SYMLINK libspdk_env_dpdk.so 00:01:22.660 LIB libspdk_sock.a 00:01:22.660 SO libspdk_sock.so.10.0 00:01:22.660 SYMLINK libspdk_sock.so 00:01:22.917 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:22.917 CC lib/nvme/nvme_ctrlr.o 00:01:22.917 CC lib/nvme/nvme_fabric.o 00:01:22.917 CC lib/nvme/nvme_ns_cmd.o 00:01:22.917 CC lib/nvme/nvme_ns.o 00:01:22.917 CC lib/nvme/nvme_pcie_common.o 00:01:22.917 CC lib/nvme/nvme_pcie.o 00:01:22.917 CC lib/nvme/nvme_qpair.o 00:01:22.917 CC lib/nvme/nvme.o 00:01:22.917 CC lib/nvme/nvme_quirks.o 00:01:22.917 CC lib/nvme/nvme_transport.o 00:01:22.917 CC lib/nvme/nvme_discovery.o 00:01:22.917 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:22.917 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:22.918 CC lib/nvme/nvme_tcp.o 00:01:22.918 CC lib/nvme/nvme_opal.o 00:01:22.918 CC lib/nvme/nvme_io_msg.o 00:01:22.918 CC lib/nvme/nvme_poll_group.o 00:01:22.918 CC lib/nvme/nvme_zns.o 00:01:22.918 CC lib/nvme/nvme_stubs.o 00:01:22.918 CC lib/nvme/nvme_auth.o 00:01:22.918 CC lib/nvme/nvme_cuse.o 00:01:22.918 CC lib/nvme/nvme_rdma.o 00:01:22.918 CC lib/nvme/nvme_vfio_user.o 00:01:23.852 LIB libspdk_thread.a 00:01:23.852 SO libspdk_thread.so.10.1 00:01:24.110 SYMLINK libspdk_thread.so 00:01:24.110 CC lib/virtio/virtio.o 00:01:24.110 CC lib/accel/accel.o 00:01:24.110 CC lib/vfu_tgt/tgt_endpoint.o 00:01:24.110 CC lib/accel/accel_rpc.o 00:01:24.110 CC lib/virtio/virtio_vhost_user.o 00:01:24.110 CC lib/init/json_config.o 00:01:24.110 CC lib/blob/blobstore.o 00:01:24.110 CC lib/accel/accel_sw.o 00:01:24.110 CC lib/vfu_tgt/tgt_rpc.o 00:01:24.110 CC lib/virtio/virtio_vfio_user.o 00:01:24.110 CC lib/init/subsystem.o 00:01:24.110 CC lib/blob/request.o 00:01:24.110 CC lib/init/subsystem_rpc.o 00:01:24.110 CC lib/blob/zeroes.o 00:01:24.110 CC lib/virtio/virtio_pci.o 00:01:24.110 CC lib/init/rpc.o 00:01:24.110 CC lib/blob/blob_bs_dev.o 00:01:24.368 LIB libspdk_init.a 00:01:24.368 SO libspdk_init.so.5.0 00:01:24.626 LIB libspdk_virtio.a 00:01:24.626 LIB libspdk_vfu_tgt.a 00:01:24.626 SYMLINK libspdk_init.so 00:01:24.626 SO libspdk_vfu_tgt.so.3.0 00:01:24.626 SO libspdk_virtio.so.7.0 00:01:24.626 SYMLINK libspdk_vfu_tgt.so 00:01:24.626 SYMLINK libspdk_virtio.so 00:01:24.626 CC lib/event/app.o 00:01:24.626 CC lib/event/reactor.o 00:01:24.626 CC lib/event/log_rpc.o 00:01:24.626 CC lib/event/app_rpc.o 00:01:24.626 CC 
lib/event/scheduler_static.o 00:01:25.191 LIB libspdk_event.a 00:01:25.191 SO libspdk_event.so.14.0 00:01:25.191 SYMLINK libspdk_event.so 00:01:25.191 LIB libspdk_accel.a 00:01:25.191 SO libspdk_accel.so.15.1 00:01:25.191 SYMLINK libspdk_accel.so 00:01:25.449 CC lib/bdev/bdev.o 00:01:25.449 CC lib/bdev/bdev_rpc.o 00:01:25.449 CC lib/bdev/bdev_zone.o 00:01:25.449 CC lib/bdev/part.o 00:01:25.449 CC lib/bdev/scsi_nvme.o 00:01:25.449 LIB libspdk_nvme.a 00:01:25.706 SO libspdk_nvme.so.13.1 00:01:25.963 SYMLINK libspdk_nvme.so 00:01:27.334 LIB libspdk_blob.a 00:01:27.334 SO libspdk_blob.so.11.0 00:01:27.334 SYMLINK libspdk_blob.so 00:01:27.334 CC lib/blobfs/blobfs.o 00:01:27.334 CC lib/blobfs/tree.o 00:01:27.334 CC lib/lvol/lvol.o 00:01:27.901 LIB libspdk_bdev.a 00:01:27.901 SO libspdk_bdev.so.15.1 00:01:28.167 SYMLINK libspdk_bdev.so 00:01:28.167 LIB libspdk_blobfs.a 00:01:28.167 SO libspdk_blobfs.so.10.0 00:01:28.167 CC lib/scsi/dev.o 00:01:28.167 CC lib/nbd/nbd.o 00:01:28.167 CC lib/ublk/ublk.o 00:01:28.167 CC lib/nbd/nbd_rpc.o 00:01:28.167 CC lib/scsi/lun.o 00:01:28.167 CC lib/nvmf/ctrlr.o 00:01:28.167 CC lib/ftl/ftl_core.o 00:01:28.167 CC lib/ublk/ublk_rpc.o 00:01:28.167 CC lib/scsi/port.o 00:01:28.167 CC lib/nvmf/ctrlr_discovery.o 00:01:28.167 CC lib/ftl/ftl_init.o 00:01:28.167 CC lib/scsi/scsi.o 00:01:28.167 CC lib/nvmf/ctrlr_bdev.o 00:01:28.167 CC lib/ftl/ftl_layout.o 00:01:28.167 CC lib/nvmf/subsystem.o 00:01:28.167 CC lib/ftl/ftl_debug.o 00:01:28.167 CC lib/scsi/scsi_bdev.o 00:01:28.167 CC lib/ftl/ftl_io.o 00:01:28.167 CC lib/nvmf/nvmf.o 00:01:28.167 CC lib/scsi/scsi_pr.o 00:01:28.167 CC lib/nvmf/nvmf_rpc.o 00:01:28.167 CC lib/ftl/ftl_sb.o 00:01:28.167 CC lib/scsi/scsi_rpc.o 00:01:28.167 CC lib/nvmf/transport.o 00:01:28.167 CC lib/ftl/ftl_l2p.o 00:01:28.167 CC lib/nvmf/tcp.o 00:01:28.167 CC lib/scsi/task.o 00:01:28.167 CC lib/ftl/ftl_l2p_flat.o 00:01:28.167 CC lib/nvmf/stubs.o 00:01:28.167 CC lib/ftl/ftl_nv_cache.o 00:01:28.167 CC lib/ftl/ftl_band.o 00:01:28.167 CC lib/nvmf/mdns_server.o 00:01:28.167 CC lib/nvmf/vfio_user.o 00:01:28.167 CC lib/ftl/ftl_band_ops.o 00:01:28.167 CC lib/ftl/ftl_writer.o 00:01:28.167 CC lib/nvmf/rdma.o 00:01:28.167 CC lib/ftl/ftl_rq.o 00:01:28.167 CC lib/nvmf/auth.o 00:01:28.167 CC lib/ftl/ftl_reloc.o 00:01:28.167 CC lib/ftl/ftl_l2p_cache.o 00:01:28.167 CC lib/ftl/ftl_p2l.o 00:01:28.167 CC lib/ftl/mngt/ftl_mngt.o 00:01:28.167 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:28.167 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:28.167 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:28.167 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:28.427 SYMLINK libspdk_blobfs.so 00:01:28.427 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:28.427 LIB libspdk_lvol.a 00:01:28.427 SO libspdk_lvol.so.10.0 00:01:28.691 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:28.691 SYMLINK libspdk_lvol.so 00:01:28.691 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:28.691 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:28.691 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:28.691 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:28.691 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:28.691 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:28.691 CC lib/ftl/utils/ftl_conf.o 00:01:28.691 CC lib/ftl/utils/ftl_md.o 00:01:28.691 CC lib/ftl/utils/ftl_mempool.o 00:01:28.691 CC lib/ftl/utils/ftl_bitmap.o 00:01:28.691 CC lib/ftl/utils/ftl_property.o 00:01:28.691 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:28.691 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:28.691 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:28.691 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:28.691 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:01:28.691 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:28.691 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:28.949 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:28.949 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:28.949 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:28.949 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:28.949 CC lib/ftl/base/ftl_base_dev.o 00:01:28.949 CC lib/ftl/base/ftl_base_bdev.o 00:01:28.949 CC lib/ftl/ftl_trace.o 00:01:28.949 LIB libspdk_nbd.a 00:01:28.949 SO libspdk_nbd.so.7.0 00:01:29.206 SYMLINK libspdk_nbd.so 00:01:29.206 LIB libspdk_scsi.a 00:01:29.206 SO libspdk_scsi.so.9.0 00:01:29.488 LIB libspdk_ublk.a 00:01:29.489 SYMLINK libspdk_scsi.so 00:01:29.489 SO libspdk_ublk.so.3.0 00:01:29.489 SYMLINK libspdk_ublk.so 00:01:29.489 CC lib/vhost/vhost.o 00:01:29.489 CC lib/iscsi/conn.o 00:01:29.489 CC lib/iscsi/init_grp.o 00:01:29.489 CC lib/vhost/vhost_rpc.o 00:01:29.489 CC lib/iscsi/iscsi.o 00:01:29.489 CC lib/vhost/vhost_scsi.o 00:01:29.489 CC lib/iscsi/md5.o 00:01:29.489 CC lib/vhost/vhost_blk.o 00:01:29.489 CC lib/iscsi/param.o 00:01:29.489 CC lib/vhost/rte_vhost_user.o 00:01:29.489 CC lib/iscsi/portal_grp.o 00:01:29.489 CC lib/iscsi/tgt_node.o 00:01:29.489 CC lib/iscsi/iscsi_subsystem.o 00:01:29.489 CC lib/iscsi/iscsi_rpc.o 00:01:29.489 CC lib/iscsi/task.o 00:01:29.747 LIB libspdk_ftl.a 00:01:29.747 SO libspdk_ftl.so.9.0 00:01:30.310 SYMLINK libspdk_ftl.so 00:01:30.873 LIB libspdk_vhost.a 00:01:30.873 SO libspdk_vhost.so.8.0 00:01:30.873 LIB libspdk_nvmf.a 00:01:30.873 SYMLINK libspdk_vhost.so 00:01:30.873 SO libspdk_nvmf.so.18.1 00:01:30.873 LIB libspdk_iscsi.a 00:01:30.873 SO libspdk_iscsi.so.8.0 00:01:31.131 SYMLINK libspdk_nvmf.so 00:01:31.131 SYMLINK libspdk_iscsi.so 00:01:31.388 CC module/env_dpdk/env_dpdk_rpc.o 00:01:31.388 CC module/vfu_device/vfu_virtio.o 00:01:31.388 CC module/vfu_device/vfu_virtio_blk.o 00:01:31.388 CC module/vfu_device/vfu_virtio_scsi.o 00:01:31.388 CC module/vfu_device/vfu_virtio_rpc.o 00:01:31.388 CC module/keyring/linux/keyring.o 00:01:31.388 CC module/keyring/file/keyring.o 00:01:31.388 CC module/blob/bdev/blob_bdev.o 00:01:31.388 CC module/sock/posix/posix.o 00:01:31.388 CC module/scheduler/gscheduler/gscheduler.o 00:01:31.388 CC module/accel/ioat/accel_ioat.o 00:01:31.388 CC module/accel/dsa/accel_dsa.o 00:01:31.388 CC module/keyring/file/keyring_rpc.o 00:01:31.388 CC module/keyring/linux/keyring_rpc.o 00:01:31.388 CC module/accel/ioat/accel_ioat_rpc.o 00:01:31.388 CC module/accel/dsa/accel_dsa_rpc.o 00:01:31.388 CC module/accel/error/accel_error.o 00:01:31.388 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:31.388 CC module/accel/error/accel_error_rpc.o 00:01:31.388 CC module/accel/iaa/accel_iaa.o 00:01:31.388 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:31.388 CC module/accel/iaa/accel_iaa_rpc.o 00:01:31.658 LIB libspdk_env_dpdk_rpc.a 00:01:31.658 SO libspdk_env_dpdk_rpc.so.6.0 00:01:31.658 SYMLINK libspdk_env_dpdk_rpc.so 00:01:31.658 LIB libspdk_keyring_linux.a 00:01:31.658 LIB libspdk_scheduler_dpdk_governor.a 00:01:31.658 SO libspdk_keyring_linux.so.1.0 00:01:31.658 LIB libspdk_scheduler_gscheduler.a 00:01:31.658 LIB libspdk_accel_error.a 00:01:31.658 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:31.658 LIB libspdk_accel_ioat.a 00:01:31.658 LIB libspdk_scheduler_dynamic.a 00:01:31.658 SO libspdk_scheduler_gscheduler.so.4.0 00:01:31.658 SO libspdk_accel_error.so.2.0 00:01:31.658 LIB libspdk_keyring_file.a 00:01:31.658 LIB libspdk_accel_iaa.a 00:01:31.658 SO libspdk_accel_ioat.so.6.0 00:01:31.658 SO 
libspdk_scheduler_dynamic.so.4.0 00:01:31.658 SYMLINK libspdk_keyring_linux.so 00:01:31.658 SO libspdk_keyring_file.so.1.0 00:01:31.658 SO libspdk_accel_iaa.so.3.0 00:01:31.658 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:31.658 SYMLINK libspdk_scheduler_gscheduler.so 00:01:31.915 SYMLINK libspdk_accel_error.so 00:01:31.915 LIB libspdk_blob_bdev.a 00:01:31.915 SYMLINK libspdk_scheduler_dynamic.so 00:01:31.915 SYMLINK libspdk_accel_ioat.so 00:01:31.915 SYMLINK libspdk_keyring_file.so 00:01:31.915 SYMLINK libspdk_accel_iaa.so 00:01:31.915 SO libspdk_blob_bdev.so.11.0 00:01:31.915 LIB libspdk_accel_dsa.a 00:01:31.915 SYMLINK libspdk_blob_bdev.so 00:01:31.915 SO libspdk_accel_dsa.so.5.0 00:01:31.915 SYMLINK libspdk_accel_dsa.so 00:01:32.173 LIB libspdk_vfu_device.a 00:01:32.173 SO libspdk_vfu_device.so.3.0 00:01:32.173 CC module/bdev/malloc/bdev_malloc.o 00:01:32.173 CC module/bdev/error/vbdev_error.o 00:01:32.173 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:32.173 CC module/bdev/delay/vbdev_delay.o 00:01:32.173 CC module/bdev/nvme/bdev_nvme.o 00:01:32.173 CC module/bdev/error/vbdev_error_rpc.o 00:01:32.173 CC module/blobfs/bdev/blobfs_bdev.o 00:01:32.173 CC module/bdev/null/bdev_null.o 00:01:32.173 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:32.173 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:32.173 CC module/bdev/null/bdev_null_rpc.o 00:01:32.173 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:32.173 CC module/bdev/split/vbdev_split.o 00:01:32.173 CC module/bdev/gpt/gpt.o 00:01:32.173 CC module/bdev/passthru/vbdev_passthru.o 00:01:32.173 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:32.173 CC module/bdev/raid/bdev_raid.o 00:01:32.173 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:32.173 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:32.173 CC module/bdev/nvme/nvme_rpc.o 00:01:32.173 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:32.173 CC module/bdev/gpt/vbdev_gpt.o 00:01:32.173 CC module/bdev/nvme/bdev_mdns_client.o 00:01:32.173 CC module/bdev/split/vbdev_split_rpc.o 00:01:32.173 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:32.173 CC module/bdev/raid/bdev_raid_rpc.o 00:01:32.173 CC module/bdev/lvol/vbdev_lvol.o 00:01:32.173 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:32.173 CC module/bdev/nvme/vbdev_opal.o 00:01:32.173 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:32.173 CC module/bdev/raid/bdev_raid_sb.o 00:01:32.173 CC module/bdev/aio/bdev_aio.o 00:01:32.173 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:32.173 CC module/bdev/raid/raid0.o 00:01:32.173 CC module/bdev/aio/bdev_aio_rpc.o 00:01:32.173 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:32.173 CC module/bdev/ftl/bdev_ftl.o 00:01:32.173 CC module/bdev/raid/raid1.o 00:01:32.173 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:32.173 CC module/bdev/iscsi/bdev_iscsi.o 00:01:32.173 CC module/bdev/raid/concat.o 00:01:32.173 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:32.173 SYMLINK libspdk_vfu_device.so 00:01:32.449 LIB libspdk_sock_posix.a 00:01:32.450 SO libspdk_sock_posix.so.6.0 00:01:32.450 LIB libspdk_blobfs_bdev.a 00:01:32.450 SO libspdk_blobfs_bdev.so.6.0 00:01:32.450 SYMLINK libspdk_sock_posix.so 00:01:32.706 LIB libspdk_bdev_null.a 00:01:32.706 LIB libspdk_bdev_split.a 00:01:32.706 LIB libspdk_bdev_error.a 00:01:32.706 SO libspdk_bdev_split.so.6.0 00:01:32.706 SO libspdk_bdev_null.so.6.0 00:01:32.706 SYMLINK libspdk_blobfs_bdev.so 00:01:32.706 LIB libspdk_bdev_ftl.a 00:01:32.706 SO libspdk_bdev_error.so.6.0 00:01:32.706 LIB libspdk_bdev_iscsi.a 00:01:32.706 SO libspdk_bdev_ftl.so.6.0 00:01:32.706 LIB libspdk_bdev_gpt.a 
00:01:32.706 SO libspdk_bdev_iscsi.so.6.0 00:01:32.706 SYMLINK libspdk_bdev_split.so 00:01:32.706 SYMLINK libspdk_bdev_null.so 00:01:32.706 SO libspdk_bdev_gpt.so.6.0 00:01:32.706 SYMLINK libspdk_bdev_error.so 00:01:32.706 LIB libspdk_bdev_passthru.a 00:01:32.706 LIB libspdk_bdev_malloc.a 00:01:32.706 SYMLINK libspdk_bdev_ftl.so 00:01:32.706 SO libspdk_bdev_passthru.so.6.0 00:01:32.706 SYMLINK libspdk_bdev_iscsi.so 00:01:32.706 LIB libspdk_bdev_zone_block.a 00:01:32.706 SO libspdk_bdev_malloc.so.6.0 00:01:32.706 SYMLINK libspdk_bdev_gpt.so 00:01:32.706 LIB libspdk_bdev_aio.a 00:01:32.707 SO libspdk_bdev_zone_block.so.6.0 00:01:32.707 LIB libspdk_bdev_delay.a 00:01:32.707 LIB libspdk_bdev_virtio.a 00:01:32.707 SYMLINK libspdk_bdev_passthru.so 00:01:32.707 SO libspdk_bdev_aio.so.6.0 00:01:32.707 SYMLINK libspdk_bdev_malloc.so 00:01:32.707 SO libspdk_bdev_delay.so.6.0 00:01:32.707 SO libspdk_bdev_virtio.so.6.0 00:01:32.707 SYMLINK libspdk_bdev_zone_block.so 00:01:32.964 SYMLINK libspdk_bdev_aio.so 00:01:32.964 SYMLINK libspdk_bdev_delay.so 00:01:32.964 SYMLINK libspdk_bdev_virtio.so 00:01:32.964 LIB libspdk_bdev_lvol.a 00:01:32.964 SO libspdk_bdev_lvol.so.6.0 00:01:32.964 SYMLINK libspdk_bdev_lvol.so 00:01:33.221 LIB libspdk_bdev_raid.a 00:01:33.221 SO libspdk_bdev_raid.so.6.0 00:01:33.479 SYMLINK libspdk_bdev_raid.so 00:01:34.415 LIB libspdk_bdev_nvme.a 00:01:34.673 SO libspdk_bdev_nvme.so.7.0 00:01:34.673 SYMLINK libspdk_bdev_nvme.so 00:01:34.930 CC module/event/subsystems/scheduler/scheduler.o 00:01:34.930 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:34.930 CC module/event/subsystems/keyring/keyring.o 00:01:34.930 CC module/event/subsystems/vmd/vmd.o 00:01:34.930 CC module/event/subsystems/iobuf/iobuf.o 00:01:34.930 CC module/event/subsystems/sock/sock.o 00:01:34.930 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:34.930 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:34.930 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:35.187 LIB libspdk_event_keyring.a 00:01:35.187 LIB libspdk_event_vhost_blk.a 00:01:35.187 LIB libspdk_event_scheduler.a 00:01:35.187 LIB libspdk_event_vfu_tgt.a 00:01:35.187 LIB libspdk_event_vmd.a 00:01:35.187 LIB libspdk_event_sock.a 00:01:35.187 SO libspdk_event_keyring.so.1.0 00:01:35.187 SO libspdk_event_vhost_blk.so.3.0 00:01:35.187 SO libspdk_event_scheduler.so.4.0 00:01:35.187 LIB libspdk_event_iobuf.a 00:01:35.187 SO libspdk_event_vfu_tgt.so.3.0 00:01:35.187 SO libspdk_event_vmd.so.6.0 00:01:35.187 SO libspdk_event_sock.so.5.0 00:01:35.187 SO libspdk_event_iobuf.so.3.0 00:01:35.187 SYMLINK libspdk_event_keyring.so 00:01:35.187 SYMLINK libspdk_event_vhost_blk.so 00:01:35.187 SYMLINK libspdk_event_scheduler.so 00:01:35.187 SYMLINK libspdk_event_vfu_tgt.so 00:01:35.187 SYMLINK libspdk_event_sock.so 00:01:35.187 SYMLINK libspdk_event_vmd.so 00:01:35.187 SYMLINK libspdk_event_iobuf.so 00:01:35.444 CC module/event/subsystems/accel/accel.o 00:01:35.702 LIB libspdk_event_accel.a 00:01:35.702 SO libspdk_event_accel.so.6.0 00:01:35.702 SYMLINK libspdk_event_accel.so 00:01:35.993 CC module/event/subsystems/bdev/bdev.o 00:01:35.993 LIB libspdk_event_bdev.a 00:01:35.993 SO libspdk_event_bdev.so.6.0 00:01:35.993 SYMLINK libspdk_event_bdev.so 00:01:36.251 CC module/event/subsystems/scsi/scsi.o 00:01:36.251 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:36.251 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:36.251 CC module/event/subsystems/ublk/ublk.o 00:01:36.251 CC module/event/subsystems/nbd/nbd.o 00:01:36.509 LIB libspdk_event_nbd.a 00:01:36.509 LIB 
libspdk_event_ublk.a 00:01:36.509 LIB libspdk_event_scsi.a 00:01:36.509 SO libspdk_event_nbd.so.6.0 00:01:36.509 SO libspdk_event_ublk.so.3.0 00:01:36.509 SO libspdk_event_scsi.so.6.0 00:01:36.509 SYMLINK libspdk_event_nbd.so 00:01:36.509 SYMLINK libspdk_event_ublk.so 00:01:36.509 SYMLINK libspdk_event_scsi.so 00:01:36.509 LIB libspdk_event_nvmf.a 00:01:36.509 SO libspdk_event_nvmf.so.6.0 00:01:36.509 SYMLINK libspdk_event_nvmf.so 00:01:36.509 CC module/event/subsystems/iscsi/iscsi.o 00:01:36.509 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:36.767 LIB libspdk_event_vhost_scsi.a 00:01:36.767 LIB libspdk_event_iscsi.a 00:01:36.767 SO libspdk_event_vhost_scsi.so.3.0 00:01:36.767 SO libspdk_event_iscsi.so.6.0 00:01:36.767 SYMLINK libspdk_event_vhost_scsi.so 00:01:36.767 SYMLINK libspdk_event_iscsi.so 00:01:37.047 SO libspdk.so.6.0 00:01:37.047 SYMLINK libspdk.so 00:01:37.316 CXX app/trace/trace.o 00:01:37.316 CC app/spdk_top/spdk_top.o 00:01:37.316 CC app/spdk_nvme_perf/perf.o 00:01:37.316 CC app/trace_record/trace_record.o 00:01:37.316 TEST_HEADER include/spdk/accel.h 00:01:37.316 CC app/spdk_nvme_discover/discovery_aer.o 00:01:37.316 TEST_HEADER include/spdk/accel_module.h 00:01:37.316 TEST_HEADER include/spdk/assert.h 00:01:37.316 TEST_HEADER include/spdk/barrier.h 00:01:37.316 TEST_HEADER include/spdk/base64.h 00:01:37.316 TEST_HEADER include/spdk/bdev.h 00:01:37.316 CC test/rpc_client/rpc_client_test.o 00:01:37.316 CC app/spdk_nvme_identify/identify.o 00:01:37.316 TEST_HEADER include/spdk/bdev_module.h 00:01:37.316 TEST_HEADER include/spdk/bdev_zone.h 00:01:37.316 CC app/spdk_lspci/spdk_lspci.o 00:01:37.316 TEST_HEADER include/spdk/bit_array.h 00:01:37.316 TEST_HEADER include/spdk/bit_pool.h 00:01:37.316 TEST_HEADER include/spdk/blob_bdev.h 00:01:37.316 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:37.316 TEST_HEADER include/spdk/blobfs.h 00:01:37.316 TEST_HEADER include/spdk/blob.h 00:01:37.316 TEST_HEADER include/spdk/conf.h 00:01:37.316 TEST_HEADER include/spdk/config.h 00:01:37.316 TEST_HEADER include/spdk/crc16.h 00:01:37.316 TEST_HEADER include/spdk/cpuset.h 00:01:37.316 TEST_HEADER include/spdk/crc32.h 00:01:37.316 TEST_HEADER include/spdk/crc64.h 00:01:37.316 TEST_HEADER include/spdk/dif.h 00:01:37.316 TEST_HEADER include/spdk/dma.h 00:01:37.316 TEST_HEADER include/spdk/endian.h 00:01:37.316 TEST_HEADER include/spdk/env_dpdk.h 00:01:37.316 TEST_HEADER include/spdk/env.h 00:01:37.316 TEST_HEADER include/spdk/event.h 00:01:37.316 TEST_HEADER include/spdk/fd_group.h 00:01:37.316 TEST_HEADER include/spdk/fd.h 00:01:37.316 TEST_HEADER include/spdk/file.h 00:01:37.316 TEST_HEADER include/spdk/gpt_spec.h 00:01:37.316 TEST_HEADER include/spdk/ftl.h 00:01:37.316 TEST_HEADER include/spdk/hexlify.h 00:01:37.316 TEST_HEADER include/spdk/histogram_data.h 00:01:37.316 TEST_HEADER include/spdk/idxd.h 00:01:37.316 TEST_HEADER include/spdk/idxd_spec.h 00:01:37.316 TEST_HEADER include/spdk/ioat.h 00:01:37.316 TEST_HEADER include/spdk/init.h 00:01:37.316 TEST_HEADER include/spdk/ioat_spec.h 00:01:37.316 TEST_HEADER include/spdk/json.h 00:01:37.316 TEST_HEADER include/spdk/iscsi_spec.h 00:01:37.316 TEST_HEADER include/spdk/jsonrpc.h 00:01:37.316 TEST_HEADER include/spdk/keyring.h 00:01:37.316 TEST_HEADER include/spdk/keyring_module.h 00:01:37.316 TEST_HEADER include/spdk/likely.h 00:01:37.316 TEST_HEADER include/spdk/log.h 00:01:37.316 TEST_HEADER include/spdk/lvol.h 00:01:37.316 TEST_HEADER include/spdk/memory.h 00:01:37.316 TEST_HEADER include/spdk/mmio.h 00:01:37.316 TEST_HEADER 
include/spdk/nbd.h 00:01:37.316 TEST_HEADER include/spdk/notify.h 00:01:37.316 TEST_HEADER include/spdk/nvme.h 00:01:37.316 TEST_HEADER include/spdk/nvme_intel.h 00:01:37.316 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:37.316 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:37.316 TEST_HEADER include/spdk/nvme_spec.h 00:01:37.316 TEST_HEADER include/spdk/nvme_zns.h 00:01:37.316 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:37.316 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:37.316 TEST_HEADER include/spdk/nvmf.h 00:01:37.316 TEST_HEADER include/spdk/nvmf_spec.h 00:01:37.316 TEST_HEADER include/spdk/nvmf_transport.h 00:01:37.316 TEST_HEADER include/spdk/opal.h 00:01:37.316 TEST_HEADER include/spdk/opal_spec.h 00:01:37.316 TEST_HEADER include/spdk/pci_ids.h 00:01:37.316 TEST_HEADER include/spdk/pipe.h 00:01:37.316 TEST_HEADER include/spdk/queue.h 00:01:37.316 TEST_HEADER include/spdk/reduce.h 00:01:37.316 TEST_HEADER include/spdk/rpc.h 00:01:37.316 TEST_HEADER include/spdk/scheduler.h 00:01:37.316 TEST_HEADER include/spdk/scsi.h 00:01:37.316 TEST_HEADER include/spdk/scsi_spec.h 00:01:37.316 TEST_HEADER include/spdk/sock.h 00:01:37.316 TEST_HEADER include/spdk/stdinc.h 00:01:37.316 TEST_HEADER include/spdk/string.h 00:01:37.316 TEST_HEADER include/spdk/thread.h 00:01:37.316 TEST_HEADER include/spdk/trace.h 00:01:37.316 TEST_HEADER include/spdk/trace_parser.h 00:01:37.316 TEST_HEADER include/spdk/tree.h 00:01:37.316 TEST_HEADER include/spdk/ublk.h 00:01:37.316 TEST_HEADER include/spdk/util.h 00:01:37.316 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:37.316 TEST_HEADER include/spdk/uuid.h 00:01:37.316 TEST_HEADER include/spdk/version.h 00:01:37.316 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:37.316 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:37.316 TEST_HEADER include/spdk/vhost.h 00:01:37.316 TEST_HEADER include/spdk/vmd.h 00:01:37.316 TEST_HEADER include/spdk/xor.h 00:01:37.316 TEST_HEADER include/spdk/zipf.h 00:01:37.316 CXX test/cpp_headers/accel.o 00:01:37.316 CXX test/cpp_headers/accel_module.o 00:01:37.316 CXX test/cpp_headers/assert.o 00:01:37.316 CXX test/cpp_headers/barrier.o 00:01:37.316 CXX test/cpp_headers/base64.o 00:01:37.316 CXX test/cpp_headers/bdev.o 00:01:37.316 CXX test/cpp_headers/bdev_module.o 00:01:37.316 CXX test/cpp_headers/bdev_zone.o 00:01:37.316 CC app/spdk_dd/spdk_dd.o 00:01:37.316 CXX test/cpp_headers/bit_array.o 00:01:37.316 CXX test/cpp_headers/bit_pool.o 00:01:37.316 CXX test/cpp_headers/blob_bdev.o 00:01:37.316 CXX test/cpp_headers/blobfs_bdev.o 00:01:37.316 CXX test/cpp_headers/blobfs.o 00:01:37.316 CXX test/cpp_headers/blob.o 00:01:37.316 CXX test/cpp_headers/conf.o 00:01:37.316 CXX test/cpp_headers/config.o 00:01:37.316 CXX test/cpp_headers/cpuset.o 00:01:37.316 CXX test/cpp_headers/crc16.o 00:01:37.316 CC app/iscsi_tgt/iscsi_tgt.o 00:01:37.316 CC app/nvmf_tgt/nvmf_main.o 00:01:37.316 CXX test/cpp_headers/crc32.o 00:01:37.316 CC app/spdk_tgt/spdk_tgt.o 00:01:37.316 CC examples/ioat/verify/verify.o 00:01:37.316 CC test/app/histogram_perf/histogram_perf.o 00:01:37.316 CC test/env/memory/memory_ut.o 00:01:37.316 CC examples/ioat/perf/perf.o 00:01:37.316 CC test/thread/poller_perf/poller_perf.o 00:01:37.316 CC test/app/stub/stub.o 00:01:37.316 CC test/env/pci/pci_ut.o 00:01:37.316 CC app/fio/nvme/fio_plugin.o 00:01:37.316 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:37.316 CC test/app/jsoncat/jsoncat.o 00:01:37.316 CC examples/util/zipf/zipf.o 00:01:37.316 CC test/env/vtophys/vtophys.o 00:01:37.316 CC test/dma/test_dma/test_dma.o 00:01:37.316 
CC test/app/bdev_svc/bdev_svc.o 00:01:37.316 CC app/fio/bdev/fio_plugin.o 00:01:37.578 LINK spdk_lspci 00:01:37.578 CC test/env/mem_callbacks/mem_callbacks.o 00:01:37.578 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:37.578 LINK rpc_client_test 00:01:37.578 LINK spdk_nvme_discover 00:01:37.578 LINK interrupt_tgt 00:01:37.578 LINK histogram_perf 00:01:37.578 LINK nvmf_tgt 00:01:37.578 LINK jsoncat 00:01:37.578 LINK poller_perf 00:01:37.578 LINK vtophys 00:01:37.578 CXX test/cpp_headers/crc64.o 00:01:37.578 LINK zipf 00:01:37.578 CXX test/cpp_headers/dif.o 00:01:37.578 CXX test/cpp_headers/dma.o 00:01:37.847 CXX test/cpp_headers/endian.o 00:01:37.847 CXX test/cpp_headers/env_dpdk.o 00:01:37.847 CXX test/cpp_headers/env.o 00:01:37.847 LINK env_dpdk_post_init 00:01:37.847 CXX test/cpp_headers/event.o 00:01:37.847 CXX test/cpp_headers/fd_group.o 00:01:37.847 CXX test/cpp_headers/fd.o 00:01:37.847 CXX test/cpp_headers/file.o 00:01:37.847 CXX test/cpp_headers/ftl.o 00:01:37.847 LINK stub 00:01:37.847 LINK spdk_trace_record 00:01:37.847 CXX test/cpp_headers/gpt_spec.o 00:01:37.847 LINK iscsi_tgt 00:01:37.847 CXX test/cpp_headers/hexlify.o 00:01:37.847 CXX test/cpp_headers/histogram_data.o 00:01:37.847 LINK verify 00:01:37.847 CXX test/cpp_headers/idxd.o 00:01:37.847 LINK spdk_tgt 00:01:37.847 LINK bdev_svc 00:01:37.847 CXX test/cpp_headers/idxd_spec.o 00:01:37.847 LINK ioat_perf 00:01:37.847 CXX test/cpp_headers/init.o 00:01:37.847 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:37.847 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:37.847 CXX test/cpp_headers/ioat.o 00:01:37.847 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:38.108 CXX test/cpp_headers/ioat_spec.o 00:01:38.108 LINK spdk_dd 00:01:38.108 CXX test/cpp_headers/iscsi_spec.o 00:01:38.108 CXX test/cpp_headers/json.o 00:01:38.108 CXX test/cpp_headers/jsonrpc.o 00:01:38.108 CXX test/cpp_headers/keyring.o 00:01:38.108 CXX test/cpp_headers/keyring_module.o 00:01:38.108 LINK pci_ut 00:01:38.108 LINK spdk_trace 00:01:38.108 CXX test/cpp_headers/likely.o 00:01:38.108 CXX test/cpp_headers/log.o 00:01:38.108 CXX test/cpp_headers/lvol.o 00:01:38.108 CXX test/cpp_headers/memory.o 00:01:38.108 CXX test/cpp_headers/mmio.o 00:01:38.108 CXX test/cpp_headers/nbd.o 00:01:38.108 CXX test/cpp_headers/notify.o 00:01:38.108 CXX test/cpp_headers/nvme.o 00:01:38.108 CXX test/cpp_headers/nvme_intel.o 00:01:38.108 LINK test_dma 00:01:38.108 CXX test/cpp_headers/nvme_ocssd.o 00:01:38.108 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:38.108 CXX test/cpp_headers/nvme_spec.o 00:01:38.108 CXX test/cpp_headers/nvme_zns.o 00:01:38.108 CXX test/cpp_headers/nvmf_cmd.o 00:01:38.108 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:38.108 CXX test/cpp_headers/nvmf.o 00:01:38.108 CXX test/cpp_headers/nvmf_spec.o 00:01:38.108 CXX test/cpp_headers/nvmf_transport.o 00:01:38.108 CXX test/cpp_headers/opal.o 00:01:38.370 CXX test/cpp_headers/opal_spec.o 00:01:38.370 CXX test/cpp_headers/pci_ids.o 00:01:38.370 CXX test/cpp_headers/pipe.o 00:01:38.370 LINK nvme_fuzz 00:01:38.370 CXX test/cpp_headers/queue.o 00:01:38.370 CC test/event/event_perf/event_perf.o 00:01:38.370 CC test/event/reactor/reactor.o 00:01:38.370 LINK spdk_bdev 00:01:38.370 CXX test/cpp_headers/reduce.o 00:01:38.370 CXX test/cpp_headers/rpc.o 00:01:38.370 CXX test/cpp_headers/scheduler.o 00:01:38.370 CC examples/idxd/perf/perf.o 00:01:38.370 LINK spdk_nvme 00:01:38.370 CC examples/sock/hello_world/hello_sock.o 00:01:38.370 CC examples/vmd/lsvmd/lsvmd.o 00:01:38.370 CC test/event/reactor_perf/reactor_perf.o 00:01:38.630 
CXX test/cpp_headers/scsi.o 00:01:38.630 CC examples/thread/thread/thread_ex.o 00:01:38.630 CXX test/cpp_headers/scsi_spec.o 00:01:38.630 CXX test/cpp_headers/sock.o 00:01:38.630 CXX test/cpp_headers/stdinc.o 00:01:38.630 CC examples/vmd/led/led.o 00:01:38.630 CXX test/cpp_headers/string.o 00:01:38.630 CXX test/cpp_headers/thread.o 00:01:38.630 CXX test/cpp_headers/trace.o 00:01:38.630 CXX test/cpp_headers/trace_parser.o 00:01:38.630 CC test/event/app_repeat/app_repeat.o 00:01:38.630 CXX test/cpp_headers/tree.o 00:01:38.630 CXX test/cpp_headers/ublk.o 00:01:38.630 CXX test/cpp_headers/util.o 00:01:38.630 CXX test/cpp_headers/uuid.o 00:01:38.630 CXX test/cpp_headers/version.o 00:01:38.630 CXX test/cpp_headers/vfio_user_pci.o 00:01:38.630 CXX test/cpp_headers/vfio_user_spec.o 00:01:38.630 CXX test/cpp_headers/vhost.o 00:01:38.630 CXX test/cpp_headers/vmd.o 00:01:38.630 CXX test/cpp_headers/xor.o 00:01:38.630 CXX test/cpp_headers/zipf.o 00:01:38.630 LINK mem_callbacks 00:01:38.630 CC test/event/scheduler/scheduler.o 00:01:38.630 LINK spdk_nvme_perf 00:01:38.630 CC app/vhost/vhost.o 00:01:38.630 LINK reactor 00:01:38.630 LINK event_perf 00:01:38.630 LINK vhost_fuzz 00:01:38.892 LINK lsvmd 00:01:38.892 LINK spdk_nvme_identify 00:01:38.892 LINK reactor_perf 00:01:38.892 LINK spdk_top 00:01:38.892 LINK led 00:01:38.892 LINK hello_sock 00:01:38.892 CC test/nvme/err_injection/err_injection.o 00:01:38.892 LINK app_repeat 00:01:38.892 CC test/nvme/overhead/overhead.o 00:01:38.892 CC test/nvme/reset/reset.o 00:01:38.892 CC test/nvme/e2edp/nvme_dp.o 00:01:38.892 CC test/nvme/aer/aer.o 00:01:38.892 CC test/nvme/sgl/sgl.o 00:01:38.892 CC test/nvme/startup/startup.o 00:01:38.892 CC test/nvme/reserve/reserve.o 00:01:38.892 CC test/accel/dif/dif.o 00:01:39.151 CC test/nvme/simple_copy/simple_copy.o 00:01:39.151 CC test/blobfs/mkfs/mkfs.o 00:01:39.151 CC test/nvme/connect_stress/connect_stress.o 00:01:39.151 LINK thread 00:01:39.151 CC test/lvol/esnap/esnap.o 00:01:39.151 CC test/nvme/boot_partition/boot_partition.o 00:01:39.151 CC test/nvme/compliance/nvme_compliance.o 00:01:39.152 CC test/nvme/fused_ordering/fused_ordering.o 00:01:39.152 LINK vhost 00:01:39.152 CC test/nvme/cuse/cuse.o 00:01:39.152 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:39.152 LINK idxd_perf 00:01:39.152 CC test/nvme/fdp/fdp.o 00:01:39.152 LINK scheduler 00:01:39.152 LINK startup 00:01:39.152 LINK err_injection 00:01:39.409 LINK boot_partition 00:01:39.409 LINK simple_copy 00:01:39.409 LINK reserve 00:01:39.409 LINK doorbell_aers 00:01:39.409 LINK fused_ordering 00:01:39.409 LINK connect_stress 00:01:39.409 LINK overhead 00:01:39.409 CC examples/nvme/abort/abort.o 00:01:39.409 CC examples/nvme/hotplug/hotplug.o 00:01:39.409 CC examples/nvme/reconnect/reconnect.o 00:01:39.409 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:39.409 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:39.409 CC examples/nvme/hello_world/hello_world.o 00:01:39.409 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:39.409 CC examples/nvme/arbitration/arbitration.o 00:01:39.409 LINK mkfs 00:01:39.409 LINK nvme_dp 00:01:39.409 LINK aer 00:01:39.409 LINK reset 00:01:39.409 LINK sgl 00:01:39.409 LINK nvme_compliance 00:01:39.409 LINK memory_ut 00:01:39.666 CC examples/accel/perf/accel_perf.o 00:01:39.666 LINK fdp 00:01:39.666 CC examples/blob/hello_world/hello_blob.o 00:01:39.666 CC examples/blob/cli/blobcli.o 00:01:39.666 LINK pmr_persistence 00:01:39.666 LINK dif 00:01:39.666 LINK cmb_copy 00:01:39.666 LINK hello_world 00:01:39.666 LINK hotplug 00:01:39.923 
LINK reconnect 00:01:39.923 LINK abort 00:01:39.923 LINK hello_blob 00:01:39.923 LINK arbitration 00:01:39.923 LINK accel_perf 00:01:40.180 LINK nvme_manage 00:01:40.180 CC test/bdev/bdevio/bdevio.o 00:01:40.180 LINK blobcli 00:01:40.180 LINK iscsi_fuzz 00:01:40.436 CC examples/bdev/hello_world/hello_bdev.o 00:01:40.436 CC examples/bdev/bdevperf/bdevperf.o 00:01:40.436 LINK bdevio 00:01:40.693 LINK hello_bdev 00:01:40.693 LINK cuse 00:01:41.256 LINK bdevperf 00:01:41.513 CC examples/nvmf/nvmf/nvmf.o 00:01:41.770 LINK nvmf 00:01:44.297 LINK esnap 00:01:44.556 00:01:44.556 real 0m49.265s 00:01:44.556 user 10m11.666s 00:01:44.556 sys 2m29.177s 00:01:44.556 10:18:32 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:44.556 10:18:32 make -- common/autotest_common.sh@10 -- $ set +x 00:01:44.556 ************************************ 00:01:44.556 END TEST make 00:01:44.556 ************************************ 00:01:44.556 10:18:32 -- common/autotest_common.sh@1142 -- $ return 0 00:01:44.556 10:18:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:44.556 10:18:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:44.556 10:18:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:44.556 10:18:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.556 10:18:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:44.556 10:18:32 -- pm/common@44 -- $ pid=991002 00:01:44.556 10:18:32 -- pm/common@50 -- $ kill -TERM 991002 00:01:44.556 10:18:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.556 10:18:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:44.556 10:18:32 -- pm/common@44 -- $ pid=991004 00:01:44.556 10:18:32 -- pm/common@50 -- $ kill -TERM 991004 00:01:44.556 10:18:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.556 10:18:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:44.556 10:18:32 -- pm/common@44 -- $ pid=991006 00:01:44.556 10:18:32 -- pm/common@50 -- $ kill -TERM 991006 00:01:44.556 10:18:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.556 10:18:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:44.556 10:18:32 -- pm/common@44 -- $ pid=991033 00:01:44.556 10:18:32 -- pm/common@50 -- $ sudo -E kill -TERM 991033 00:01:44.556 10:18:32 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:44.556 10:18:32 -- nvmf/common.sh@7 -- # uname -s 00:01:44.556 10:18:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:44.556 10:18:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:44.556 10:18:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:44.556 10:18:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:44.557 10:18:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:44.557 10:18:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:44.557 10:18:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:44.557 10:18:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:44.557 10:18:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:44.557 10:18:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:44.557 10:18:32 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:01:44.557 10:18:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:01:44.557 10:18:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:44.557 10:18:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:44.557 10:18:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:44.557 10:18:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:44.557 10:18:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:44.557 10:18:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:44.557 10:18:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:44.557 10:18:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:44.557 10:18:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.557 10:18:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.557 10:18:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.557 10:18:32 -- paths/export.sh@5 -- # export PATH 00:01:44.557 10:18:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.557 10:18:32 -- nvmf/common.sh@47 -- # : 0 00:01:44.557 10:18:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:44.557 10:18:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:44.557 10:18:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:44.557 10:18:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:44.557 10:18:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:44.557 10:18:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:44.557 10:18:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:44.557 10:18:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:44.557 10:18:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:44.557 10:18:32 -- spdk/autotest.sh@32 -- # uname -s 00:01:44.557 10:18:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:44.557 10:18:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:44.557 10:18:32 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:44.557 10:18:32 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:44.557 10:18:32 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:44.557 10:18:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:44.557 10:18:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:44.557 10:18:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:44.557 10:18:32 -- spdk/autotest.sh@48 -- # udevadm_pid=1047107 00:01:44.557 10:18:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:44.557 10:18:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:44.557 10:18:32 -- pm/common@17 -- # local monitor 00:01:44.557 10:18:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.557 10:18:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.557 10:18:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.557 10:18:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.557 10:18:32 -- pm/common@21 -- # date +%s 00:01:44.557 10:18:32 -- pm/common@21 -- # date +%s 00:01:44.557 10:18:32 -- pm/common@25 -- # sleep 1 00:01:44.557 10:18:32 -- pm/common@21 -- # date +%s 00:01:44.557 10:18:32 -- pm/common@21 -- # date +%s 00:01:44.557 10:18:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721031512 00:01:44.557 10:18:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721031512 00:01:44.557 10:18:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721031512 00:01:44.557 10:18:32 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721031512 00:01:44.557 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721031512_collect-vmstat.pm.log 00:01:44.557 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721031512_collect-cpu-load.pm.log 00:01:44.557 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721031512_collect-cpu-temp.pm.log 00:01:44.557 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721031512_collect-bmc-pm.bmc.pm.log 00:01:45.499 10:18:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:45.499 10:18:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:45.499 10:18:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:01:45.499 10:18:33 -- common/autotest_common.sh@10 -- # set +x 00:01:45.499 10:18:33 -- spdk/autotest.sh@59 -- # create_test_list 00:01:45.499 10:18:33 -- common/autotest_common.sh@746 -- # xtrace_disable 00:01:45.499 10:18:34 -- common/autotest_common.sh@10 -- # set +x 00:01:45.499 10:18:34 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:45.499 10:18:34 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.499 10:18:34 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
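Two things happen in the span above: the kernel core pattern is saved (old_core_pattern) and pointed at core-collector.sh so crashes land in the coredumps directory, and four collectors are launched with -l -p <pidfile> so the TERM handler shown earlier can find them. A minimal sketch of the core-pattern half, assuming root access and with $rootdir as a placeholder; the restore step is implied rather than shown in this log:

    # Save the current pattern, then pipe cores to a collector script (sketch, needs root).
    old_core_pattern=$(< /proc/sys/kernel/core_pattern)
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    # ... run the tests ...
    echo "$old_core_pattern" > /proc/sys/kernel/core_pattern   # put the old handler back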
00:01:45.499 10:18:34 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:45.499 10:18:34 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.499 10:18:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:45.499 10:18:34 -- common/autotest_common.sh@1455 -- # uname 00:01:45.499 10:18:34 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:45.499 10:18:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:45.499 10:18:34 -- common/autotest_common.sh@1475 -- # uname 00:01:45.499 10:18:34 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:45.499 10:18:34 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:45.499 10:18:34 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:45.499 10:18:34 -- spdk/autotest.sh@72 -- # hash lcov 00:01:45.499 10:18:34 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:45.499 10:18:34 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:45.499 --rc lcov_branch_coverage=1 00:01:45.499 --rc lcov_function_coverage=1 00:01:45.499 --rc genhtml_branch_coverage=1 00:01:45.499 --rc genhtml_function_coverage=1 00:01:45.499 --rc genhtml_legend=1 00:01:45.499 --rc geninfo_all_blocks=1 00:01:45.499 ' 00:01:45.499 10:18:34 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:45.499 --rc lcov_branch_coverage=1 00:01:45.499 --rc lcov_function_coverage=1 00:01:45.499 --rc genhtml_branch_coverage=1 00:01:45.499 --rc genhtml_function_coverage=1 00:01:45.499 --rc genhtml_legend=1 00:01:45.499 --rc geninfo_all_blocks=1 00:01:45.499 ' 00:01:45.499 10:18:34 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:45.499 --rc lcov_branch_coverage=1 00:01:45.499 --rc lcov_function_coverage=1 00:01:45.499 --rc genhtml_branch_coverage=1 00:01:45.499 --rc genhtml_function_coverage=1 00:01:45.499 --rc genhtml_legend=1 00:01:45.499 --rc geninfo_all_blocks=1 00:01:45.499 --no-external' 00:01:45.499 10:18:34 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:45.499 --rc lcov_branch_coverage=1 00:01:45.499 --rc lcov_function_coverage=1 00:01:45.499 --rc genhtml_branch_coverage=1 00:01:45.499 --rc genhtml_function_coverage=1 00:01:45.499 --rc genhtml_legend=1 00:01:45.499 --rc geninfo_all_blocks=1 00:01:45.499 --no-external' 00:01:45.499 10:18:34 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:45.757 lcov: LCOV version 1.14 00:01:45.757 10:18:34 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:01:51.031 
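The lcov invocation above takes an initial (-i) baseline with all counters at zero before any test runs; the geninfo warnings that follow are expected for headers no object file has exercised yet. The usual follow-up, sketched here with assumed file names, is a second capture after the tests and a merge so untouched files still appear with zero coverage:

    # Baseline before tests (mirrors the command above), post-test capture, then merge.
    lcov $LCOV_OPTS -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"
    # ... run the test suites ...
    lcov $LCOV_OPTS -q -c    -t Tests    -d "$src" -o "$out/cov_test.info"
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"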
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:01:51.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:01:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 
00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:01:51.032 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:01:51.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:01:51.032 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:01:51.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:01:51.291 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:01:51.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:01:51.291 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:01:51.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:01:51.291 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:01:51.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:01:51.291 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:01:51.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:01:51.291 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:01:51.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:01:51.291 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:01:51.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:01:51.291 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:01:51.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:01:51.291 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:01:51.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:01:51.291 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:13.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:13.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:19.791 10:19:07 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:19.791 10:19:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:19.791 10:19:07 -- common/autotest_common.sh@10 -- # set +x 00:02:19.791 10:19:07 -- spdk/autotest.sh@91 -- # rm -f 00:02:19.791 10:19:07 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:19.791 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:19.791 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:19.791 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:19.791 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:19.791 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:20.051 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:20.051 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:20.051 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:20.051 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:02:20.051 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:20.051 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:20.051 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:20.051 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:20.051 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:20.051 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:20.051 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:20.051 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:20.051 10:19:08 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:20.051 10:19:08 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:20.051 10:19:08 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:20.051 10:19:08 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:20.051 10:19:08 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:20.051 10:19:08 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:20.051 10:19:08 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:20.309 10:19:08 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:20.309 10:19:08 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:20.309 10:19:08 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:20.309 10:19:08 -- 
spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:20.309 10:19:08 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:20.309 10:19:08 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:20.309 10:19:08 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:20.310 10:19:08 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:20.310 No valid GPT data, bailing 00:02:20.310 10:19:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:20.310 10:19:08 -- scripts/common.sh@391 -- # pt= 00:02:20.310 10:19:08 -- scripts/common.sh@392 -- # return 1 00:02:20.310 10:19:08 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:20.310 1+0 records in 00:02:20.310 1+0 records out 00:02:20.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0024974 s, 420 MB/s 00:02:20.310 10:19:08 -- spdk/autotest.sh@118 -- # sync 00:02:20.310 10:19:08 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:20.310 10:19:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:20.310 10:19:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:22.210 10:19:10 -- spdk/autotest.sh@124 -- # uname -s 00:02:22.210 10:19:10 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:22.210 10:19:10 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:22.210 10:19:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:22.210 10:19:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:22.210 10:19:10 -- common/autotest_common.sh@10 -- # set +x 00:02:22.210 ************************************ 00:02:22.210 START TEST setup.sh 00:02:22.210 ************************************ 00:02:22.210 10:19:10 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:22.210 * Looking for test storage... 00:02:22.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:22.210 10:19:10 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:22.210 10:19:10 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:22.210 10:19:10 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:22.210 10:19:10 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:22.210 10:19:10 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:22.210 10:19:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:22.468 ************************************ 00:02:22.468 START TEST acl 00:02:22.468 ************************************ 00:02:22.468 10:19:10 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:22.468 * Looking for test storage... 
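Before the 1 MiB dd lands on /dev/nvme0n1, the pre-cleanup above runs the guards visible in the trace: the sysfs zoned check (zoned namespaces are skipped), spdk-gpt.py ("No valid GPT data, bailing"), and a blkid PTTYPE probe that comes back empty. A condensed sketch of that gate; the safe_to_wipe helper name is made up for illustration:

    # Sketch: wipe the first MiB of an NVMe namespace only if it looks unused.
    safe_to_wipe() {
        local dev=$1 name=${1##*/}
        if [[ -e /sys/block/$name/queue/zoned ]]; then
            [[ $(< "/sys/block/$name/queue/zoned") == none ]] || return 1   # zoned, skip
        fi
        blkid -s PTTYPE -o value "$dev" | grep -q . && return 1             # has a partition table
        return 0
    }
    if safe_to_wipe /dev/nvme0n1; then
        dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
    fi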
00:02:22.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:22.468 10:19:10 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:22.468 10:19:10 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:22.468 10:19:10 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:22.468 10:19:10 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:22.468 10:19:10 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:22.468 10:19:10 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:22.468 10:19:10 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:22.468 10:19:10 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:22.468 10:19:10 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:22.468 10:19:10 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:22.468 10:19:10 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:22.468 10:19:10 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:22.468 10:19:10 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:22.468 10:19:10 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:22.468 10:19:10 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:22.468 10:19:10 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:23.841 10:19:12 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:23.841 10:19:12 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:23.841 10:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.841 10:19:12 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:23.841 10:19:12 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:23.841 10:19:12 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:25.223 Hugepages 00:02:25.223 node hugesize free / total 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 00:02:25.223 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:25.223 10:19:13 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:25.223 10:19:13 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:25.223 10:19:13 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:25.223 10:19:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:25.223 ************************************ 00:02:25.223 START TEST denied 00:02:25.223 ************************************ 00:02:25.223 10:19:13 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:25.224 10:19:13 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:02:25.224 10:19:13 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:25.224 10:19:13 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:02:25.224 10:19:13 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:25.224 10:19:13 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:26.599 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:02:26.599 10:19:15 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:02:26.599 10:19:15 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:26.599 10:19:15 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:26.599 10:19:15 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:02:26.599 10:19:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:02:26.599 10:19:15 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:26.599 10:19:15 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:26.599 10:19:15 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:26.599 10:19:15 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:26.599 10:19:15 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:29.173 00:02:29.173 real 0m3.993s 00:02:29.173 user 0m1.072s 00:02:29.173 sys 0m1.969s 00:02:29.173 10:19:17 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:29.173 10:19:17 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:29.173 ************************************ 00:02:29.173 END TEST denied 00:02:29.173 ************************************ 00:02:29.173 10:19:17 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:29.173 10:19:17 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:29.173 10:19:17 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:29.173 10:19:17 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:29.173 10:19:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:29.173 ************************************ 00:02:29.173 START TEST allowed 00:02:29.173 ************************************ 00:02:29.173 10:19:17 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:29.173 10:19:17 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:02:29.173 10:19:17 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:29.173 10:19:17 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:02:29.173 10:19:17 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:29.173 10:19:17 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:31.708 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:02:31.708 10:19:19 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:31.708 10:19:19 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:31.708 10:19:19 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:31.708 10:19:19 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:31.708 10:19:19 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:33.086 00:02:33.086 real 0m3.919s 00:02:33.086 user 0m1.039s 00:02:33.086 sys 0m1.784s 00:02:33.086 10:19:21 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:33.086 10:19:21 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:33.086 ************************************ 00:02:33.086 END TEST allowed 00:02:33.086 ************************************ 00:02:33.086 10:19:21 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:33.086 00:02:33.086 real 0m10.788s 00:02:33.086 user 0m3.269s 00:02:33.086 sys 0m5.537s 00:02:33.086 10:19:21 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:33.086 10:19:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:33.086 ************************************ 00:02:33.086 END TEST acl 00:02:33.086 ************************************ 00:02:33.086 10:19:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:33.086 10:19:21 setup.sh -- 
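The denied/allowed pair above reduces to one check: after setup.sh runs with PCI_BLOCKED or PCI_ALLOWED, read which driver the controller's sysfs node points at and compare it to what the policy demands (nvme while blocked, vfio-pci once handed over, as the "nvme -> vfio-pci" line shows). A small sketch of that verification, with the expected driver passed in:

    # Sketch: confirm a PCI function is bound to the driver we expect.
    verify_driver() {
        local bdf=$1 expected=$2 driver
        [[ -e /sys/bus/pci/devices/$bdf/driver ]] || return 1    # not bound to any driver
        driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
        [[ $driver == "$expected" ]]
    }
    verify_driver 0000:0b:00.0 nvme      # after PCI_BLOCKED: still on the kernel driver
    verify_driver 0000:0b:00.0 vfio-pci  # after PCI_ALLOWED: rebound for userspace use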
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:33.086 10:19:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:33.086 10:19:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:33.086 10:19:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:33.086 ************************************ 00:02:33.086 START TEST hugepages 00:02:33.086 ************************************ 00:02:33.086 10:19:21 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:33.346 * Looking for test storage... 00:02:33.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44763044 kB' 'MemAvailable: 48249940 kB' 'Buffers: 2704 kB' 'Cached: 9330880 kB' 'SwapCached: 0 kB' 'Active: 6306644 kB' 'Inactive: 3507644 kB' 'Active(anon): 5916872 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484088 kB' 'Mapped: 165360 kB' 'Shmem: 5436168 kB' 'KReclaimable: 166208 kB' 'Slab: 490060 kB' 'SReclaimable: 166208 kB' 'SUnreclaim: 323852 kB' 'KernelStack: 12992 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 7075964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.346 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.347 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.348 10:19:21 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:33.348 
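The trace above shows the get_meminfo helper from setup/common.sh walking /proc/meminfo one "key: value" line at a time until it reaches Hugepagesize, then echoing 2048 and returning, after which hugepages.sh records the default page size and clears per-node counts. The following is only a stand-alone sketch of that style of lookup; the function name and structure are assumptions for illustration, not the real setup/common.sh code.

# Hedged sketch (not SPDK's setup/common.sh): scan "key: value" pairs in
# /proc/meminfo and print the value for the requested key, as the trace does.
get_meminfo_value() {
    local want=$1 key val _
    while IFS=': ' read -r key val _; do
        [[ $key == "$want" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
# example: default_hugepages=$(get_meminfo_value Hugepagesize)   # -> 2048 (kB)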
10:19:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:33.348 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:33.348 10:19:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:33.348 10:19:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:33.348 10:19:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:33.348 ************************************ 00:02:33.348 START TEST default_setup 00:02:33.348 ************************************ 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:33.348 10:19:21 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:34.723 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:34.723 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:34.723 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:34.723 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:34.723 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:34.723 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:34.723 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:34.723 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:34.723 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:34.723 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:34.723 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:34.723 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:34.723 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:34.723 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:34.723 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:34.723 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:35.656 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46867660 kB' 'MemAvailable: 50354484 kB' 'Buffers: 2704 kB' 'Cached: 9330968 kB' 'SwapCached: 0 kB' 'Active: 6323852 kB' 'Inactive: 3507644 kB' 'Active(anon): 5934080 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501064 kB' 'Mapped: 165360 kB' 'Shmem: 5436256 kB' 'KReclaimable: 166064 kB' 'Slab: 489756 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 323692 kB' 
'KernelStack: 12816 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7093232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.919 
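The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines a little above are printed while spdk/scripts/setup.sh rebinds the ioat DMA channels and the NVMe drive to vfio-pci before the hugepage checks continue. As a rough, generic illustration of how such a rebind is typically done through sysfs (the helper name is invented and this is not a copy of setup.sh):

# Hedged sketch: generic sysfs rebind of one PCI function to vfio-pci.
# Needs root; bdf is a full PCI address such as 0000:0b:00.0 (illustrative).
rebind_to_vfio() {
    local bdf=$1
    modprobe vfio-pci
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"   # detach current driver
    fi
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"   # pin the next probe to vfio-pci
    echo "$bdf" > /sys/bus/pci/drivers_probe                      # ask the kernel to re-probe it
}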
10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.919 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:35.920 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.920 10:19:24 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46867092 kB' 'MemAvailable: 50353916 kB' 'Buffers: 2704 kB' 'Cached: 9330972 kB' 'SwapCached: 0 kB' 'Active: 6323444 kB' 'Inactive: 3507644 kB' 'Active(anon): 5933672 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500616 kB' 'Mapped: 165312 kB' 'Shmem: 5436260 kB' 'KReclaimable: 166064 kB' 'Slab: 489768 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 323704 kB' 'KernelStack: 12880 kB' 'PageTables: 7732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7093252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.921 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
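The verify_nr_hugepages trace running through this part of the log reads the HugePages_* counters back out of /proc/meminfo (surplus, reserved, totals) to confirm the requested allocation actually took effect. A loose sketch of that idea follows; the helper name and the rounding arithmetic are illustrative only and differ from the real hugepages.sh.

# Hedged sketch: derive how many default-size pages a request needs and compare
# against what the kernel reports, in the spirit of the verification trace above.
check_hugepages() {
    local want_kb=$1 page_kb total
    page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    local need=$(( (want_kb + page_kb - 1) / page_kb ))   # round up to whole pages
    echo "need=$need pages of ${page_kb} kB, kernel reports HugePages_Total=$total"
    (( total >= need ))
}
# example: check_hugepages 2097152   # 2 GiB worth of 2048 kB pages -> 1024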
00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46870068 kB' 'MemAvailable: 50356892 kB' 'Buffers: 2704 kB' 'Cached: 9330988 kB' 'SwapCached: 0 kB' 'Active: 6323884 kB' 'Inactive: 3507644 kB' 'Active(anon): 5934112 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501136 kB' 'Mapped: 165396 kB' 'Shmem: 5436276 kB' 'KReclaimable: 166064 kB' 'Slab: 489856 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 323792 kB' 'KernelStack: 12880 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7093272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.922 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
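A note on how these comparisons are rendered: the right-hand side of each [[ ... == ... ]] appears as \H\u\g\e\P\a\g\e\s\_\R\s\v\d because bash's xtrace escapes every character of a quoted pattern to mark it as a literal match rather than a glob; the backslashes are a display artifact of set -x, not part of the script. A two-line reproduction (the variable value is just an example):

set -x
var=MemTotal
[[ $var == "HugePages_Rsvd" ]]   # trace shows: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]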
00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.923 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 
10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:35.924 nr_hugepages=1024 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:35.924 resv_hugepages=0 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:35.924 surplus_hugepages=0 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:35.924 anon_hugepages=0 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.924 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46871684 
kB' 'MemAvailable: 50358508 kB' 'Buffers: 2704 kB' 'Cached: 9331012 kB' 'SwapCached: 0 kB' 'Active: 6323916 kB' 'Inactive: 3507644 kB' 'Active(anon): 5934144 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501136 kB' 'Mapped: 165396 kB' 'Shmem: 5436300 kB' 'KReclaimable: 166064 kB' 'Slab: 489856 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 323792 kB' 'KernelStack: 12880 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7093296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
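The meminfo snapshot printed a few entries back reports HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB; those figures are internally consistent, since 1024 pages of 2048 kB each is exactly the 2 GiB the Hugetlb line accounts for. A quick check with the values hard-coded from this log:

echo $(( 1024 * 2048 ))    # 2097152 kB, matching the Hugetlb figure above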
00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.925 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
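At this point the scan has reached the HugePages_Total key; the trace that follows echoes 1024, re-checks that the total equals nr_hugepages + surplus + reserved (hugepages.sh@110), and then enumerates the NUMA nodes under /sys/devices/system/node. A hedged sketch of those two steps, with awk standing in for the script's own get_meminfo helper and values taken from this log:

# verify hugepage accounting and count NUMA nodes (this log: 1024 pages, 0 surplus, 0 reserved, 2 nodes)
nr_hugepages=1024 surp=0 resv=0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) && echo "accounting consistent: $total pages"
no_nodes=$(ls -d /sys/devices/system/node/node[0-9]* | wc -l)
echo "no_nodes=$no_nodes"    # 2 on this machine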
00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22121636 kB' 'MemUsed: 10708248 kB' 'SwapCached: 0 kB' 'Active: 4451200 kB' 'Inactive: 3350280 kB' 'Active(anon): 4317524 kB' 'Inactive(anon): 0 kB' 'Active(file): 133676 kB' 'Inactive(file): 3350280 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7531112 kB' 'Mapped: 72548 kB' 'AnonPages: 273548 kB' 'Shmem: 4047156 kB' 'KernelStack: 5608 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67836 kB' 'Slab: 198044 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 130208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:35.926 10:19:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.926 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
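The per-node pass above reads /sys/devices/system/node/node0/meminfo, where every line carries a "Node 0 " prefix that the traced code strips with the extglob expansion mem=("${mem[@]#Node +([0-9]) }") before re-running the same key scan. A stand-alone sketch of the node-0 lookup, using sed for the prefix strip instead of extglob (the helper name is illustrative):

# sketch: read one key from node 0's meminfo, dropping the "Node 0 " prefix first
node_meminfo_sketch() {
    local node=$1 get=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "/sys/devices/system/node/node${node}/meminfo")
    return 1
}
node_meminfo_sketch 0 HugePages_Surp    # 0 for node0 in this log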
00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:35.927 node0=1024 expecting 1024 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:35.927 00:02:35.927 real 0m2.635s 00:02:35.927 user 0m0.690s 00:02:35.927 sys 0m0.974s 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:35.927 10:19:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:35.927 ************************************ 00:02:35.927 END TEST default_setup 00:02:35.927 ************************************ 00:02:35.927 10:19:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:35.927 10:19:24 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:35.927 10:19:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:35.927 10:19:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:35.927 10:19:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:35.927 ************************************ 00:02:35.928 START TEST per_node_1G_alloc 00:02:35.928 ************************************ 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:35.928 10:19:24 
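[editor's note] The scan that just finished above is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" line at a time with IFS=': ' until the requested key (here HugePages_Surp) matches, then echoing the value. A minimal stand-alone sketch of that lookup pattern follows; the function name and fallback are illustrative, not the SPDK helper itself, and the real helper additionally snapshots the file with mapfile and strips per-node prefixes (visible in the later get_meminfo traces):

meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        # Each /proc/meminfo line looks like "HugePages_Surp:  0" or "MemTotal: 60541712 kB",
        # so var gets the key, val the number, and _ swallows the trailing "kB" if present.
        [[ $var == "$key" ]] && { echo "${val:-0}"; return 0; }
    done < /proc/meminfo
    echo 0   # key not present: report 0, matching the "echo 0" seen in the trace
}
meminfo_value HugePages_Surp    # prints 0 on the node traced above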
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:35.928 10:19:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:37.311 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:37.311 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:37.311 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:37.311 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:37.311 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:37.311 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:37.312 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:37.312 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:37.312 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:37.312 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:37.312 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:37.312 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:37.312 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:37.312 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:37.312 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:37.312 
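[editor's note] The get_test_nr_hugepages trace above converts the 1048576 kB (1 GiB) per-node request into 512 default-size pages: 1048576 kB / 2048 kB (the Hugepagesize shown in the meminfo dumps further down) = 512, recorded for both node 0 and node 1 before scripts/setup.sh is run with NRHUGE=512 HUGENODE=0,1. A rough recomputation of that arithmetic, with illustrative variable names, is below; the remaining vfio-pci device claims from setup.sh continue right after it:

size_kb=1048576                                                  # 1 GiB per node, in kB
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this host
pages=$(( size_kb / hugepage_kb ))                               # 1048576 / 2048 = 512
declare -a nodes_test
for node in 0 1; do
    nodes_test[node]=$pages                                      # node0=512, node1=512 -> 1024 total
done
echo "NRHUGE=$pages HUGENODE=0,1"                                # what the trace hands to scripts/setup.sh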
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:37.312 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46850904 kB' 'MemAvailable: 50337728 kB' 'Buffers: 2704 kB' 'Cached: 9331092 kB' 'SwapCached: 0 kB' 'Active: 6329800 kB' 'Inactive: 3507644 kB' 'Active(anon): 5940028 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506852 kB' 'Mapped: 166388 kB' 'Shmem: 5436380 kB' 'KReclaimable: 166064 kB' 'Slab: 490052 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 323988 kB' 'KernelStack: 12896 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7099764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196488 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46856632 kB' 'MemAvailable: 50343456 kB' 'Buffers: 2704 kB' 'Cached: 9331092 kB' 'SwapCached: 0 kB' 'Active: 6330124 kB' 'Inactive: 3507644 kB' 'Active(anon): 5940352 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507176 kB' 'Mapped: 166416 kB' 'Shmem: 5436380 kB' 'KReclaimable: 166064 kB' 'Slab: 490028 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 323964 kB' 'KernelStack: 12880 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7099784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196440 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
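[editor's note] The get_meminfo calls traced here first mapfile the whole meminfo file and then run mem=("${mem[@]#Node +([0-9]) }"). That extglob strip exists because the per-node files under /sys/devices/system/node/nodeN/meminfo prefix every line with "Node N ", while /proc/meminfo does not; stripping the prefix lets one parser handle both. A small illustrative sketch (node=0 is only for demonstration; the calls traced above pass no node and fall back to /proc/meminfo):

shopt -s extglob
node=0
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ...", /proc lines unchanged
printf '%s\n' "${mem[@]:0:3}"      # first few normalized lines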
00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46852048 kB' 'MemAvailable: 50338872 kB' 'Buffers: 2704 kB' 'Cached: 9331092 kB' 'SwapCached: 0 kB' 'Active: 6326624 kB' 'Inactive: 3507644 kB' 'Active(anon): 5936852 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503644 kB' 'Mapped: 166340 kB' 'Shmem: 5436380 kB' 'KReclaimable: 166064 kB' 'Slab: 490028 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 323964 kB' 'KernelStack: 12880 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7097152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196436 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 
10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.317 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.579 10:19:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:37.579 nr_hugepages=1024 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:37.579 
resv_hugepages=0 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:37.579 surplus_hugepages=0 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:37.579 anon_hugepages=0 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.579 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46851600 kB' 'MemAvailable: 50338424 kB' 'Buffers: 2704 kB' 'Cached: 9331132 kB' 'SwapCached: 0 kB' 'Active: 6329684 kB' 'Inactive: 3507644 kB' 'Active(anon): 5939912 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506768 kB' 'Mapped: 166692 kB' 'Shmem: 5436420 kB' 'KReclaimable: 166064 kB' 'Slab: 490020 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 323956 kB' 'KernelStack: 12896 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7099828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196440 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 
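The meminfo snapshot just printed is internally consistent: HugePages_Total (1024) times Hugepagesize (2048 kB) equals the reported Hugetlb figure of 2097152 kB, i.e. 2 GiB of 2 MiB pages, with all 1024 pages still free and none reserved or surplus. The same cross-check can be run on a live system with a one-liner of this shape (illustrative only, not part of the test):

    awk '/^HugePages_Total/ {t = $2} /^Hugepagesize/ {s = $2} END {print t * s " kB"}' /proc/meminfo

For the configuration shown here this prints 2097152 kB, matching the Hugetlb line in the snapshot.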
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.580 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:37.581 10:19:25 
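At this point the trace has confirmed HugePages_Total == 1024 and moved into get_nodes, which walks /sys/devices/system/node/node* and records 512 pages for each of the two nodes (no_nodes=2); the per_node_1G_alloc case therefore expects the 1024 pages to be split evenly across both NUMA nodes. A rough sketch of that discovery step, reconstructed from the xtrace; reading the per-node count from the sysfs nr_hugepages file is shown as one plausible source of the 512 seen above, not necessarily the exact mechanism the script uses:

    # sketch of the traced node-discovery loop; details may differ from hugepages.sh
    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # per-node count of 2048 kB hugepages (512 on each node in this run)
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))    # the script only proceeds when at least one node was found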
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 23152744 kB' 'MemUsed: 9677140 kB' 'SwapCached: 0 kB' 'Active: 4450944 kB' 'Inactive: 3350280 kB' 'Active(anon): 4317268 kB' 'Inactive(anon): 0 kB' 'Active(file): 133676 kB' 'Inactive(file): 3350280 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7531228 kB' 'Mapped: 72720 kB' 'AnonPages: 273240 kB' 'Shmem: 4047272 kB' 'KernelStack: 5608 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67836 kB' 'Slab: 198012 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 130176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.581 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.582 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
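Just below, the scan finally matches HugePages_Surp, echoes 0, and hugepages.sh folds that into its per-node expectation before repeating the same scan for node 1. A loose reconstruction of that bookkeeping, assuming resv and the nodes_sys readings that the real script computes elsewhere:

    declare -a nodes_test=([0]=512 [1]=512) nodes_sys=([0]=512 [1]=512)
    resv=0
    for node in "${!nodes_test[@]}"; do
        # Add reserved pages plus this node's surplus pages to the expected count.
        surp=$(awk '/HugePages_Surp/ {print $NF}' \
               "/sys/devices/system/node/node$node/meminfo" 2>/dev/null)
        (( nodes_test[node] += resv + surp ))
    done
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done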
00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711828 kB' 'MemFree: 23699160 kB' 'MemUsed: 4012668 kB' 'SwapCached: 0 kB' 'Active: 1873244 kB' 'Inactive: 157364 kB' 'Active(anon): 1617148 kB' 'Inactive(anon): 0 kB' 'Active(file): 256096 kB' 'Inactive(file): 157364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1802636 kB' 'Mapped: 92856 kB' 'AnonPages: 228040 kB' 'Shmem: 1389176 kB' 'KernelStack: 7288 kB' 'PageTables: 3308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98228 kB' 'Slab: 292008 kB' 'SReclaimable: 98228 kB' 'SUnreclaim: 193780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:37.583 
10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
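Two quick sanity checks against the node 1 dump embedded above, since the numbers are easy to misread in a flattened trace: MemUsed should equal MemTotal minus MemFree, and 512 hugepages of 2048 kB is exactly the 1 GiB per node that gives per_node_1G_alloc its name.

    echo $(( 27711828 - 23699160 ))   # 4012668 kB, matches the dump's "MemUsed: 4012668 kB"
    echo $(( 512 * 2048 ))            # 1048576 kB = 1 GiB of 2 MiB hugepages on node 1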
00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.583 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:37.584 node0=512 expecting 512 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:37.584 node1=512 expecting 512 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:37.584 00:02:37.584 real 0m1.511s 00:02:37.584 user 0m0.622s 00:02:37.584 sys 0m0.853s 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:37.584 10:19:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:37.584 ************************************ 00:02:37.584 END TEST per_node_1G_alloc 00:02:37.584 ************************************ 00:02:37.584 10:19:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:37.584 10:19:25 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:37.584 10:19:25 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:37.584 10:19:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:37.584 10:19:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:37.584 ************************************ 00:02:37.584 START TEST even_2G_alloc 00:02:37.584 ************************************ 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:37.584 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:37.585 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:37.585 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:37.585 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:37.585 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:37.585 10:19:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:38.963 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:38.963 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
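The even_2G_alloc prologue above asks get_test_nr_hugepages for 2097152 kB, which the trace resolves to 1024 default-size (2048 kB) hugepages split evenly across the two NUMA nodes. A minimal sketch of that split, assuming the size argument is in kB as the trace suggests (the real hugepages.sh fills the array from the highest node downward, which is simplified here):

    size_kb=2097152          # argument seen in the trace, i.e. 2 GiB expressed in kB
    hugepage_kb=2048         # default hugepage size from the meminfo dumps
    no_nodes=2
    nr_hugepages=$(( size_kb / hugepage_kb ))           # 1024
    declare -a nodes_test
    for (( node = 0; node < no_nodes; node++ )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes )) # 512 per node, matching the trace
    done
    echo "nr_hugepages=$nr_hugepages per-node: ${nodes_test[*]}"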
00:02:38.963 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:38.963 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:38.963 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:38.963 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:38.963 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:38.963 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:38.963 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:38.963 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:38.963 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:38.963 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:38.963 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:38.963 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:38.963 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:38.963 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:38.963 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46850472 kB' 'MemAvailable: 50337296 kB' 'Buffers: 2704 kB' 'Cached: 9331224 kB' 'SwapCached: 0 kB' 'Active: 6325776 kB' 'Inactive: 3507644 kB' 'Active(anon): 5936004 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502684 kB' 'Mapped: 165596 kB' 'Shmem: 5436512 kB' 'KReclaimable: 166064 kB' 'Slab: 490108 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 324044 kB' 'KernelStack: 12928 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7093904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.963 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 
10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
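The scan in progress above is the AnonHugePages lookup that verify_nr_hugepages starts the new test with; node= is empty this time, so it walks the system-wide /proc/meminfo dump shown further up rather than a per-node sysfs file. Two cross-checks on that dump, plus a loose restatement of the THP guard at hugepages.sh@96 (the exact script logic is assumed, not quoted):

    echo $(( 1024 * 2048 ))   # 2097152 kB, matches "Hugetlb: 2097152 kB" for 1024 x 2048 kB pages
    # The "always [madvise] never" test earlier is the THP policy string; AnonHugePages is only
    # worth counting when transparent hugepages are not pinned to [never].
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    [[ $thp != *"[never]"* ]] && echo "THP enabled, AnonHugePages is meaningful"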
00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46850472 kB' 'MemAvailable: 50337296 kB' 'Buffers: 2704 kB' 'Cached: 9331228 kB' 'SwapCached: 0 kB' 'Active: 6325004 kB' 'Inactive: 3507644 kB' 'Active(anon): 5935232 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501924 kB' 'Mapped: 165548 kB' 'Shmem: 5436516 kB' 'KReclaimable: 166064 kB' 'Slab: 490108 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 324044 kB' 'KernelStack: 12944 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7093924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 10:19:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.967 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46850724 kB' 'MemAvailable: 50337548 kB' 'Buffers: 2704 kB' 'Cached: 9331228 kB' 'SwapCached: 0 kB' 'Active: 6324212 kB' 'Inactive: 3507644 kB' 'Active(anon): 5934440 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501112 kB' 'Mapped: 165428 kB' 'Shmem: 5436516 kB' 'KReclaimable: 166064 kB' 'Slab: 490108 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 324044 kB' 'KernelStack: 12912 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7093944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
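(Editor's note, not part of the captured trace: the wall of "continue" lines above and below is the set -x trace of setup/common.sh's get_meminfo walking every /proc/meminfo key until it reaches the one requested — HugePages_Surp first, then HugePages_Rsvd — echoing its value and returning. A minimal stand-alone sketch of that scan, with a hypothetical helper name and the per-node meminfo handling omitted:

get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] || continue   # each mismatch is one of the "continue" lines in the trace
                echo "$val"                        # e.g. 0 for HugePages_Surp in this run
                return 0
        done </proc/meminfo
        return 1
}

surp=$(get_meminfo_sketch HugePages_Surp)   # -> 0
resv=$(get_meminfo_sketch HugePages_Rsvd)   # -> 0
End of note; trace resumes below.)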
00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.968 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.969 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.970 10:19:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:38.970 nr_hugepages=1024 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:38.970 resv_hugepages=0 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:38.970 surplus_hugepages=0 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:38.970 anon_hugepages=0 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
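(Editor's note, not part of the captured trace: the nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 lines above are hugepages.sh confirming that the pool it configured is exactly what the kernel now reports; the two arithmetic checks in the trace reduce to the following sketch, with variable names taken from the log and the surrounding harness assumed:

want=1024            # pages the even_2G_alloc test asked for
nr_hugepages=1024    # HugePages_Total read back from /proc/meminfo
surp=0               # HugePages_Surp
resv=0               # HugePages_Rsvd

# hugepages.sh@107 / @109 as seen in the trace
(( want == nr_hugepages + surp + resv )) && (( want == nr_hugepages )) \
        && echo "hugepage pool matches the request"
End of note; trace resumes below.)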
00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.970 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46852372 kB' 'MemAvailable: 50339196 kB' 'Buffers: 2704 kB' 'Cached: 9331268 kB' 'SwapCached: 0 kB' 'Active: 6324536 kB' 'Inactive: 3507644 kB' 'Active(anon): 5934764 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501400 kB' 'Mapped: 165428 kB' 'Shmem: 5436556 kB' 'KReclaimable: 166064 kB' 'Slab: 490108 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 324044 kB' 'KernelStack: 12912 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7093968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 
10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.971 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.972 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.973 
10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 23149032 kB' 'MemUsed: 9680852 kB' 'SwapCached: 0 kB' 'Active: 4451144 kB' 'Inactive: 3350280 kB' 'Active(anon): 4317468 kB' 'Inactive(anon): 0 kB' 'Active(file): 133676 kB' 'Inactive(file): 3350280 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7531352 kB' 'Mapped: 72576 kB' 'AnonPages: 273220 kB' 'Shmem: 4047396 kB' 'KernelStack: 5592 kB' 'PageTables: 
4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67836 kB' 'Slab: 198224 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 130388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.973 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.974 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711828 kB' 'MemFree: 23704292 kB' 'MemUsed: 4007536 kB' 'SwapCached: 0 kB' 'Active: 1873396 kB' 'Inactive: 157364 kB' 'Active(anon): 1617300 kB' 'Inactive(anon): 0 kB' 'Active(file): 256096 kB' 'Inactive(file): 157364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1802640 kB' 'Mapped: 92852 kB' 'AnonPages: 228144 kB' 'Shmem: 1389180 kB' 'KernelStack: 7304 kB' 'PageTables: 
3312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98228 kB' 'Slab: 291884 kB' 'SReclaimable: 98228 kB' 'SUnreclaim: 193656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.975 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:38.976 node0=512 expecting 512 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:38.976 node1=512 expecting 512 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:38.976 00:02:38.976 real 0m1.501s 00:02:38.976 user 0m0.662s 00:02:38.976 sys 0m0.802s 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:38.976 10:19:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:38.976 ************************************ 00:02:38.976 END TEST even_2G_alloc 00:02:38.976 ************************************ 00:02:38.976 10:19:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:38.976 10:19:27 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:38.976 10:19:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:38.976 10:19:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:38.976 10:19:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:39.233 
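For readability between the two tests: the xtrace above is repeatedly exercising the get_meminfo helper from setup/common.sh, which reads either /proc/meminfo or a per-node /sys/devices/system/node/nodeN/meminfo file, strips the "Node N " prefix, and scans key/value pairs until the requested key matches. The following is a minimal, self-contained sketch of that pattern, reconstructed only from the trace (setup/common.sh@17..@33 as printed above); the real helper in the SPDK test tree may differ in detail.

#!/usr/bin/env bash
# Sketch of the get_meminfo lookup seen in the xtrace above; reconstructed
# from the trace for illustration, not copied from the repository.
shopt -s extglob

get_meminfo() {
    local get=$1     # key to look up, e.g. HugePages_Total or HugePages_Surp
    local node=$2    # optional NUMA node number (empty means system-wide)
    local var val _
    local mem_f mem line

    mem_f=/proc/meminfo
    # Per-node lookups read the node-local meminfo instead, as seen above for
    # /sys/devices/system/node/node0/meminfo and node1/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <n> "; strip it the same way the
    # traced mem=("${mem[@]#Node +([0-9]) }") expansion does.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # every non-matching key is skipped
        echo "$val"                        # e.g. 512 for a per-node HugePages_Total
        return 0
    done
    return 1
}

# The even_2G_alloc check above boils down to: 1024 pages in total, 512 on each
# of the two NUMA nodes, and no surplus pages.
for node in 0 1; do
    printf 'node%s HugePages_Total=%s HugePages_Surp=%s\n' \
        "$node" "$(get_meminfo HugePages_Total "$node")" \
        "$(get_meminfo HugePages_Surp "$node")"
done

The odd_alloc run that starts just below follows the same pattern, but with nr_hugepages=1025 distributed unevenly across the two nodes (512 on one, 513 on the other), so the per-node expectations change accordingly.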
************************************ 00:02:39.233 START TEST odd_alloc 00:02:39.233 ************************************ 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:39.233 10:19:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:40.173 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:40.173 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:40.173 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:40.173 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:40.173 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:40.173 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:40.173 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 
00:02:40.173 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:40.173 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:40.173 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:40.173 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:40.173 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:40.173 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:40.173 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:40.173 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:40.173 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:40.173 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46858076 kB' 'MemAvailable: 50344900 kB' 'Buffers: 2704 kB' 'Cached: 9331360 kB' 'SwapCached: 0 kB' 'Active: 6321720 kB' 'Inactive: 3507644 kB' 'Active(anon): 5931948 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498596 kB' 'Mapped: 164468 kB' 'Shmem: 5436648 kB' 'KReclaimable: 166064 kB' 'Slab: 490140 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 324076 kB' 'KernelStack: 12864 kB' 'PageTables: 7432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 
7080264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.434 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.435 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
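The pass completed above is the generic /proc/meminfo lookup the trace keeps repeating: the file is read into an array, any "Node N " prefix is stripped, and the entries are walked with IFS=': ' read -r var val _ until the requested key (here AnonHugePages) matches, at which point its value is echoed and the function returns. A minimal, self-contained sketch of that pattern follows; it is a hedged reconstruction based only on the commands visible in this trace, not the exact SPDK helper.

#!/usr/bin/env bash
# Hedged sketch of the lookup pattern shown in the trace; the real helper
# lives in SPDK's test setup scripts and may differ in detail.
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N "
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # When a NUMA node is passed, read that node's meminfo from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node entries carry a "Node N " prefix
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # same skip-until-match loop as the trace
        echo "$val"
        return 0
    done
    return 1
}
# Example: on this runner the calls traced here resolve to
#   get_meminfo AnonHugePages  -> 0
#   get_meminfo HugePages_Surp -> 0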
-- # IFS=': ' 00:02:40.436 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46860248 kB' 'MemAvailable: 50347072 kB' 'Buffers: 2704 kB' 'Cached: 9331364 kB' 'SwapCached: 0 kB' 'Active: 6321376 kB' 'Inactive: 3507644 kB' 'Active(anon): 5931604 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498208 kB' 'Mapped: 164460 kB' 'Shmem: 5436652 kB' 'KReclaimable: 166064 kB' 'Slab: 490140 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 324076 kB' 'KernelStack: 12880 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7080284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.437 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 
10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.438 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46859908 kB' 'MemAvailable: 50346732 kB' 'Buffers: 2704 kB' 'Cached: 9331380 kB' 'SwapCached: 0 kB' 'Active: 6321432 kB' 'Inactive: 3507644 kB' 'Active(anon): 5931660 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498260 kB' 'Mapped: 164460 kB' 'Shmem: 5436668 kB' 'KReclaimable: 166064 kB' 'Slab: 490200 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 324136 kB' 'KernelStack: 12864 kB' 'PageTables: 7376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7080304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- 
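The printf '%s\n' 'MemTotal: ...' blocks are the same /proc/meminfo snapshot re-emitted for each query, and the hugepage lines in it are internally consistent: 1025 pages of 2048 kB each is 2099200 kB, exactly the Hugetlb figure, and HugePages_Free equals HugePages_Total, i.e. none of the pages are mapped yet. A quick way to reproduce that arithmetic on a live system (values will differ from this run's 1025/2048):

awk '/^HugePages_Total:/ {n = $2}
     /^Hugepagesize:/    {sz = $2}
     END {print n * sz, "kB"}' /proc/meminfo
# On this runner: 1025 * 2048 kB = 2099200 kB, matching the 'Hugetlb:' line.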
setup/common.sh@32 -- # continue 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.439 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 
10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.440 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.441 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:40.442 nr_hugepages=1025 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:40.442 resv_hugepages=0 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:40.442 surplus_hugepages=0 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:40.442 anon_hugepages=0 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- 
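At this point the odd_alloc case has collected all three counters traced above (anon=0, surp=0, resv=0), echoed the summary nr_hugepages=1025 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and evaluated the two arithmetic identities at setup/hugepages.sh @107 and @109 before re-querying HugePages_Total. A rough sketch of that verification step, reusing the get_meminfo sketch above; verify_odd_alloc and expected are illustrative names, not the script's own:

# Hedged sketch of the consistency check traced from setup/hugepages.sh.
verify_odd_alloc() {
    local expected=$1          # 1025 in this run: an odd page count on purpose
    local anon surp resv total
    anon=$(get_meminfo AnonHugePages)    # 0 here
    surp=$(get_meminfo HugePages_Surp)   # 0 here
    resv=$(get_meminfo HugePages_Rsvd)   # 0 here
    total=$(get_meminfo HugePages_Total) # 1025 here
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # The trace evaluates both of these identities (hugepages.sh @107 and @109).
    (( expected == total + surp + resv )) &&
    (( expected == total ))
}
verify_odd_alloc 1025   # returns 0 on this runner's snapshot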
setup/common.sh@20 -- # local mem_f mem 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46859908 kB' 'MemAvailable: 50346732 kB' 'Buffers: 2704 kB' 'Cached: 9331400 kB' 'SwapCached: 0 kB' 'Active: 6321484 kB' 'Inactive: 3507644 kB' 'Active(anon): 5931712 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498292 kB' 'Mapped: 164460 kB' 'Shmem: 5436688 kB' 'KReclaimable: 166064 kB' 'Slab: 490192 kB' 'SReclaimable: 166064 kB' 'SUnreclaim: 324128 kB' 'KernelStack: 12880 kB' 'PageTables: 7428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7080324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.442 10:19:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.442 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.443 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.444 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.444 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.444 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.444 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.444 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.444 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.444 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.704 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 23147504 kB' 'MemUsed: 9682380 kB' 'SwapCached: 0 kB' 'Active: 4450616 kB' 'Inactive: 3350280 kB' 'Active(anon): 4316940 kB' 'Inactive(anon): 0 kB' 'Active(file): 133676 kB' 'Inactive(file): 3350280 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7531472 kB' 'Mapped: 71696 kB' 'AnonPages: 272608 kB' 'Shmem: 4047516 kB' 'KernelStack: 5624 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67836 kB' 'Slab: 198348 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 130512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.705 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:40.706 10:19:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
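The HugePages lookups traced above and below all go through the same meminfo parser in setup/common.sh: pick /proc/meminfo or a per-node meminfo file, strip the "Node N " prefix, split each line on ": ", and echo the value of the requested field. A minimal sketch of that lookup, with an illustrative helper name (get_mem_field) rather than the script's own function:

get_mem_field() {
    # Pick the system-wide file, or a per-node file when a node id is given.
    local field=$1 node=${2:-}
    local file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val
    while IFS= read -r line; do
        line=${line#Node "$node" }            # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<<"$line" # split "HugePages_Total:   1025" into name/value
        if [[ $var == "$field" ]]; then
            echo "$val"
            return 0
        fi
    done <"$file"
    return 1
}
# On this run: get_mem_field HugePages_Total -> 1025, get_mem_field HugePages_Surp 0 -> 0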
00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711828 kB' 'MemFree: 23711396 kB' 'MemUsed: 4000432 kB' 'SwapCached: 0 kB' 'Active: 1870844 kB' 'Inactive: 157364 kB' 'Active(anon): 1614748 kB' 'Inactive(anon): 0 kB' 'Active(file): 256096 kB' 'Inactive(file): 157364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1802656 kB' 'Mapped: 92764 kB' 'AnonPages: 225672 kB' 'Shmem: 1389196 kB' 'KernelStack: 7256 kB' 'PageTables: 3008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98228 kB' 'Slab: 291844 kB' 'SReclaimable: 98228 kB' 'SUnreclaim: 193616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.706 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
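The per-node counts being gathered here (node 0 has just returned its surplus, node 1 is queried next) feed the set-style comparison the odd_alloc test performs further down ([[ 512 513 == \5\1\2\ \5\1\3 ]] via the sorted_t/sorted_s arrays in setup/hugepages.sh). A minimal sketch of that check, with illustrative variable names:

# 1025 pages cannot be split evenly, so one node gets 512 and the other 513;
# which node gets which does not matter, so both expected and observed counts
# are used as numeric array indices and the index lists are compared as sets.
expected=(513 512)   # per-node request made by the test
observed=(512 513)   # per-node counts read back from the node*/meminfo files above
sorted_expected=() sorted_observed=()
for i in "${!expected[@]}"; do
    sorted_expected[${expected[i]}]=1
    sorted_observed[${observed[i]}]=1
done
# Indexed-array subscripts expand in ascending order, so both sides become "512 513".
[[ "${!sorted_observed[*]}" == "${!sorted_expected[*]}" ]] && echo 'odd_alloc layout matches'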
00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.707 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:40.708 node0=512 expecting 513 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:40.708 node1=513 expecting 512 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:40.708 00:02:40.708 real 0m1.507s 00:02:40.708 user 0m0.606s 00:02:40.708 sys 0m0.864s 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:40.708 10:19:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:40.708 ************************************ 00:02:40.708 END TEST odd_alloc 00:02:40.708 ************************************ 00:02:40.708 10:19:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:40.708 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:40.708 10:19:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:40.708 10:19:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:40.708 10:19:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:40.708 ************************************ 00:02:40.708 START TEST custom_alloc 00:02:40.708 ************************************ 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:40.708 10:19:29 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.708 10:19:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:42.090 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:42.090 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:42.090 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:42.090 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:42.090 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:42.090 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:42.090 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:42.090 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:42.090 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:42.090 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:42.090 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:42.090 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
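Before the scripts/setup.sh run whose PCI output follows, custom_alloc has turned its two size arguments into per-node page counts and the HUGENODE string. A small sketch of that computation, assuming the 2048 kB hugepage size this machine reports; names other than HUGENODE, nodes_hp and nr_hugepages are illustrative:

default_hugepage_kb=2048              # Hugepagesize from the meminfo dumps above
sizes_kb=(1048576 2097152)            # the 1 GiB and 2 GiB requests passed to get_test_nr_hugepages
nodes_hp=()
for i in "${!sizes_kb[@]}"; do
    nodes_hp[i]=$(( sizes_kb[i] / default_hugepage_kb ))   # -> 512 and 1024 pages
done
HUGENODE=() nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( nr_hugepages += nodes_hp[node] ))
done
IFS=,
echo "HUGENODE=${HUGENODE[*]} nr_hugepages=$nr_hugepages"
# -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 nr_hugepages=1536,
#    matching the values the trace passes to scripts/setup.sh.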
00:02:42.090 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:42.090 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:42.090 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:42.090 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:42.090 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45804384 kB' 'MemAvailable: 49291236 kB' 'Buffers: 2704 kB' 'Cached: 9331488 kB' 'SwapCached: 0 kB' 'Active: 6327480 kB' 'Inactive: 3507644 kB' 'Active(anon): 5937708 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504300 kB' 'Mapped: 165292 kB' 'Shmem: 5436776 kB' 'KReclaimable: 166120 kB' 'Slab: 490024 kB' 'SReclaimable: 166120 kB' 'SUnreclaim: 323904 kB' 'KernelStack: 12864 kB' 'PageTables: 7356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7086512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196600 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
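The setup/common.sh trace running here walks /proc/meminfo one field at a time: mapfile loads the file into an array, any 'Node N ' prefix is stripped, and an IFS=': ' read loop skips every entry with continue until the requested key (AnonHugePages at this point) is reached and its value echoed. A stand-alone sketch of that lookup under the same approach; the function name meminfo_get is hypothetical, and the per-node path handling is an assumption based on the sysfs file probed in the trace:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    # Hypothetical stand-alone equivalent of the get_meminfo lookup traced here.
    meminfo_get() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Assumption: per-node counters come from the sysfs file probed in the trace.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # skip every field except the one requested
            echo "$val"                       # numeric value; a trailing kB unit is dropped
            return 0
        done
        return 1
    }

    meminfo_get AnonHugePages   # printed 0 in the run above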
00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.090 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.091 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45804984 kB' 'MemAvailable: 49291836 kB' 'Buffers: 2704 kB' 'Cached: 9331492 kB' 'SwapCached: 0 kB' 'Active: 6327644 kB' 'Inactive: 3507644 kB' 'Active(anon): 5937872 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504440 kB' 'Mapped: 165316 kB' 'Shmem: 5436780 kB' 'KReclaimable: 166120 kB' 'Slab: 489992 kB' 'SReclaimable: 166120 kB' 'SUnreclaim: 323872 kB' 'KernelStack: 12880 kB' 'PageTables: 7384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7086532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196600 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.092 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.093 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45804784 kB' 'MemAvailable: 49291636 kB' 'Buffers: 2704 kB' 'Cached: 9331504 kB' 'SwapCached: 0 kB' 'Active: 6321924 kB' 'Inactive: 3507644 kB' 'Active(anon): 5932152 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498752 kB' 
'Mapped: 164880 kB' 'Shmem: 5436792 kB' 'KReclaimable: 166120 kB' 'Slab: 490048 kB' 'SReclaimable: 166120 kB' 'SUnreclaim: 323928 kB' 'KernelStack: 12864 kB' 'PageTables: 7360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7080432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.094 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.095 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.096 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:42.097 nr_hugepages=1536 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:42.097 resv_hugepages=0 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:42.097 surplus_hugepages=0 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:42.097 anon_hugepages=0 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45805240 kB' 'MemAvailable: 49292092 kB' 'Buffers: 2704 kB' 'Cached: 9331504 kB' 'SwapCached: 0 kB' 'Active: 6322400 kB' 'Inactive: 3507644 kB' 'Active(anon): 5932628 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499184 kB' 'Mapped: 164472 kB' 'Shmem: 5436792 kB' 'KReclaimable: 166120 kB' 'Slab: 490048 kB' 'SReclaimable: 166120 kB' 'SUnreclaim: 323928 kB' 'KernelStack: 12864 kB' 'PageTables: 7320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7080452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.097 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.098 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
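The xtrace lines above come from the custom_alloc hugepages test repeatedly calling get_meminfo from setup/common.sh: it opens /proc/meminfo (or the per-node sysfs copy when a node number is passed), strips the "Node N " prefix, then scans field by field until the requested key matches and echoes its value. Below is a minimal bash sketch of that behaviour, reconstructed from the trace itself; the function body follows the commands visible in the log, while the trailing accounting check (want/nr/resv/surp and the 512 + 1024 = 1536 split across node0/node1) is an illustrative assumption rather than the test's exact code.

```bash
#!/usr/bin/env bash
# Sketch of the traced get_meminfo() helper (simplified; not the verbatim SPDK script).
shopt -s extglob

get_meminfo() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo mem line var val _
  # Per-node queries (HugePages_Surp 0 / 1 in the trace) read the sysfs copy instead.
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")      # drop the leading "Node N " on sysfs lines
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue    # every non-matching field is skipped, as in the trace
    echo "$val"                         # e.g. 1536 for HugePages_Total, 0 for HugePages_Rsvd
    return 0
  done
  return 1
}

# Illustrative accounting check mirroring hugepages.sh@107/@110 in the trace:
# the requested 1536 pages must equal allocated + surplus + reserved pages,
# here split 512 on node0 and 1024 on node1.
want=1536
nr=$(get_meminfo HugePages_Total)
resv=$(get_meminfo HugePages_Rsvd)
surp=$(get_meminfo HugePages_Surp)
(( want == nr + surp + resv )) && echo "hugepage accounting matches"
```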
00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.099 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 23150332 kB' 'MemUsed: 9679552 kB' 'SwapCached: 0 kB' 'Active: 4450744 kB' 'Inactive: 3350280 kB' 'Active(anon): 4317068 kB' 'Inactive(anon): 0 kB' 'Active(file): 133676 kB' 'Inactive(file): 3350280 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7531536 kB' 'Mapped: 71708 kB' 'AnonPages: 272668 kB' 'Shmem: 4047580 kB' 'KernelStack: 5608 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67908 kB' 'Slab: 198300 kB' 'SReclaimable: 67908 kB' 'SUnreclaim: 130392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.100 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:42.101 10:19:30 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.101 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711828 kB' 'MemFree: 22655616 kB' 'MemUsed: 5056212 kB' 'SwapCached: 0 kB' 'Active: 1871532 kB' 'Inactive: 157364 kB' 'Active(anon): 1615436 kB' 'Inactive(anon): 0 kB' 'Active(file): 256096 kB' 'Inactive(file): 157364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1802736 kB' 'Mapped: 92764 kB' 'AnonPages: 226336 kB' 'Shmem: 1389276 kB' 'KernelStack: 7272 kB' 'PageTables: 2968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98212 kB' 'Slab: 291740 kB' 'SReclaimable: 98212 kB' 'SUnreclaim: 193528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.102 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:42.103 node0=512 expecting 512 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:42.103 node1=1024 expecting 1024 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:42.103 00:02:42.103 real 0m1.532s 00:02:42.103 user 0m0.657s 00:02:42.103 sys 0m0.842s 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:42.103 10:19:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:42.103 ************************************ 00:02:42.103 END TEST custom_alloc 00:02:42.103 ************************************ 00:02:42.103 10:19:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:42.103 10:19:30 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:42.103 10:19:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:42.103 10:19:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:42.103 10:19:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:42.361 ************************************ 00:02:42.361 START TEST no_shrink_alloc 00:02:42.361 ************************************ 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.361 10:19:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:43.297 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:43.297 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:43.297 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:43.297 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:43.297 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:43.297 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:43.297 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:43.297 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:43.297 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:43.297 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:43.297 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:43.297 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:43.297 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:43.297 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:43.297 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:43.297 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:43.297 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.561 10:19:31 
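Editor's note: the no_shrink_alloc prologue traced above asks for 2097152 kB of hugepages restricted to node 0; with the 2048 kB Hugepagesize reported later in /proc/meminfo that is 1024 pages, which get_test_nr_hugepages_per_node stores in nodes_test[0]. A minimal sketch of that sizing step, reconstructed from the trace rather than from the real setup/hugepages.sh (the actual script also handles the case where no node list is given by spreading the pages across all nodes):

get_test_nr_hugepages() {
  local size=$1; shift                          # requested total in kB (2097152 here)
  local node_ids=("$@")                         # explicit node list, ('0') in this run
  local default_hugepages=2048                  # kB, Hugepagesize from /proc/meminfo
  declare -ag nodes_test                        # global per-node result array
  nr_hugepages=$((size / default_hugepages))    # 2097152 / 2048 = 1024 pages
  local node
  for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages              # node 0 receives all 1024 pages
  done
}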
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46846120 kB' 'MemAvailable: 50332972 kB' 'Buffers: 2704 kB' 'Cached: 9331616 kB' 'SwapCached: 0 kB' 'Active: 6323156 kB' 'Inactive: 3507644 kB' 'Active(anon): 5933384 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499620 kB' 'Mapped: 164664 kB' 'Shmem: 5436904 kB' 'KReclaimable: 166120 kB' 'Slab: 490072 kB' 'SReclaimable: 166120 kB' 'SUnreclaim: 323952 kB' 'KernelStack: 12880 kB' 'PageTables: 7396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7080848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
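Editor's note: the long run of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" lines that follows is common.sh's get_meminfo walking /proc/meminfo one field at a time until it reaches the requested counter, then echoing just the value. A minimal sketch of that loop, inferred from the trace (the real setup/common.sh may differ in detail):

get_meminfo() {
  shopt -s extglob
  local get=$1 node=${2:-}
  local var val _
  local mem_f=/proc/meminfo
  local -a mem
  # Use the per-node file when a node id was given and the sysfs file exists.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix of per-node files
  local line
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue     # every mismatch shows up as a "continue" trace line
    echo "$val"                          # bare value, e.g. 0 for AnonHugePages
    return 0
  done
  return 1
}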
00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.561 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.562 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46845616 kB' 'MemAvailable: 50332468 kB' 'Buffers: 2704 kB' 'Cached: 9331616 kB' 'SwapCached: 0 kB' 'Active: 6324004 kB' 'Inactive: 3507644 kB' 'Active(anon): 5934232 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500356 kB' 'Mapped: 164616 kB' 'Shmem: 5436904 kB' 'KReclaimable: 166120 kB' 'Slab: 490076 kB' 'SReclaimable: 166120 kB' 'SUnreclaim: 323956 kB' 'KernelStack: 12960 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7080496 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 
10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.563 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
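Editor's note: the same field-matching loop runs three times in verify_nr_hugepages; the anon and surp assignments are visible at hugepages.sh@97 and @99, and the locals surp, resv and anon are declared at @92-94, so the lookups presumably amount to the following (the resv assignment itself is an assumption, since it is not visible in this excerpt):

anon=$(get_meminfo AnonHugePages)    # 0 kB of THP-backed anonymous memory in this run
surp=$(get_meminfo HugePages_Surp)   # 0 surplus hugepages
resv=$(get_meminfo HugePages_Rsvd)   # reserved-but-unfaulted hugepages, lookup in progress below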
00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.564 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46845284 kB' 'MemAvailable: 50332136 kB' 'Buffers: 2704 kB' 'Cached: 9331616 kB' 'SwapCached: 0 kB' 'Active: 6323148 kB' 'Inactive: 3507644 kB' 'Active(anon): 5933376 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499668 kB' 'Mapped: 164492 kB' 'Shmem: 5436904 kB' 'KReclaimable: 166120 kB' 'Slab: 490104 kB' 'SReclaimable: 166120 kB' 'SUnreclaim: 323984 kB' 'KernelStack: 12912 kB' 'PageTables: 7440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7080524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 
10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.565 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
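The loop traced above is the get_meminfo helper from setup/common.sh walking a meminfo snapshot one 'field: value' pair at a time, skipping every field that is not the one requested (HugePages_Rsvd here) and echoing the value once it matches. A minimal standalone sketch of that pattern, assuming a stock /proc/meminfo layout (the function name and default path are illustrative, not the exact SPDK source):

#!/usr/bin/env bash
# Sketch: look up one field (e.g. HugePages_Rsvd) in a meminfo-style file,
# using the same IFS=': ' / read / continue pattern visible in the trace.
get_meminfo_field() {
    local get=$1                      # field to look up, e.g. HugePages_Rsvd
    local mem_f=${2:-/proc/meminfo}   # system-wide meminfo by default
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching field is skipped
        echo "$val"                        # value only; a trailing 'kB' lands in $_
        return 0
    done < "$mem_f"
    return 1                               # field not present
}
# e.g. get_meminfo_field HugePages_Rsvd   -> prints 0 on this machine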
00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.566 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.567 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.567 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.567 10:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:43.567 nr_hugepages=1024 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:43.567 resv_hugepages=0 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:43.567 surplus_hugepages=0 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:43.567 anon_hugepages=0 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46845284 kB' 'MemAvailable: 50332136 kB' 'Buffers: 2704 kB' 'Cached: 9331616 kB' 'SwapCached: 0 kB' 'Active: 6322580 kB' 'Inactive: 3507644 kB' 'Active(anon): 5932808 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499056 kB' 'Mapped: 164492 kB' 'Shmem: 5436904 kB' 'KReclaimable: 166120 kB' 'Slab: 490104 kB' 'SReclaimable: 166120 kB' 'SUnreclaim: 323984 kB' 'KernelStack: 12912 kB' 'PageTables: 7432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7080548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.567 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
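Between the two field scans, the hugepages helper has already recorded surp=0 and resv=0 and echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0; it then asserts that the configured page count matches what the kernel reports before moving on. Roughly, as a self-contained sketch (the expected value of 1024 and the awk lookup are stand-ins for the harness's own helpers):

#!/usr/bin/env bash
# Consistency check behind the '(( 1024 == nr_hugepages + surp + resv ))' lines in the trace.
meminfo_val() { awk -F': *' -v k="$1" '$1 == k { print $2 + 0 }' /proc/meminfo; }
expected=1024
surp=$(meminfo_val HugePages_Surp)
resv=$(meminfo_val HugePages_Rsvd)
nr_hugepages=$(meminfo_val HugePages_Total)
(( expected == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch"; exit 1; }
(( expected == nr_hugepages )) || { echo "unexpected surplus/reserved pages"; exit 1; }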
00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.568 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22107452 kB' 'MemUsed: 10722432 kB' 'SwapCached: 0 kB' 'Active: 4450972 kB' 'Inactive: 3350280 kB' 'Active(anon): 4317296 kB' 'Inactive(anon): 0 kB' 'Active(file): 133676 kB' 'Inactive(file): 3350280 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7531552 kB' 'Mapped: 71732 kB' 'AnonPages: 272860 kB' 'Shmem: 4047596 kB' 'KernelStack: 5624 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67908 kB' 'Slab: 198332 kB' 'SReclaimable: 67908 kB' 'SUnreclaim: 130424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.569 10:19:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.569 10:19:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
[... 00:02:43.569-00:02:43.570 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32: get_meminfo walks the node's meminfo key by key (Unevictable through HugePages_Free all read and skipped) until HugePages_Surp matches ...]
00:02:43.570 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:43.570 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:43.570 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:43.570 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:43.570 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:43.570 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:43.570 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:02:43.570 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:43.570 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:02:43.570 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:02:43.570 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:02:43.570 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:43.570 10:19:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:44.972 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:44.972 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:44.972 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:44.972 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:44.972 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:44.972 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:44.972 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:44.972 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:44.972 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:44.972 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:44.972 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:44.972 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:44.972 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:44.972 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:44.972 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:44.972 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:44.972 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:44.972 INFO: Requested 512 hugepages but 1024 already allocated on node0
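For context on the two variables set just above: NRHUGE and CLEAR_HUGE are environment variables read by SPDK's scripts/setup.sh, and the trace shows the harness asking for 512 hugepages while leaving the existing pool in place, which is why setup.sh reports that 1024 pages are already allocated on node0 and changes nothing. A rough hand-run equivalent of this step (the workspace path is the one used by this job; exact behavior depends on the setup.sh revision checked out for this run) would be:

  sudo NRHUGE=512 CLEAR_HUGE=no /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh

With CLEAR_HUGE=no the script keeps whatever hugepage pool already satisfies the request, so the node0=1024 pool verified above survives into the next verification pass.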
10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
[... 00:02:44.972 setup/hugepages.sh@89-@94: local node, sorted_t, sorted_s, surp, resv, anon ...]
00:02:44.972 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:44.972 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[... 00:02:44.972-00:02:44.973 setup/common.sh@17-@31: get=AnonHugePages, node unset, mem_f=/proc/meminfo, mapfile -t mem, then IFS=': ' / read -r var val _ over each line ...]
00:02:44.973 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46858628 kB' 'MemAvailable: 50345480 kB' 'Buffers: 2704 kB' 'Cached: 9331728 kB' 'SwapCached: 0 kB' 'Active: 6322600 kB' 'Inactive: 3507644 kB' 'Active(anon): 5932828 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499072 kB' 'Mapped: 164580 kB' 'Shmem: 5437016 kB' 'KReclaimable: 166120 kB' 'Slab: 489940 kB' 'SReclaimable: 166120 kB' 'SUnreclaim: 323820 kB' 'KernelStack: 12896 kB' 'PageTables: 7348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7081096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB'
[... 00:02:44.973-00:02:44.974 setup/common.sh@31-@32: every key before AnonHugePages (MemTotal through HardwareCorrupted) is read and skipped, then AnonHugePages matches ...]
00:02:44.974 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:44.974 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:44.974 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
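All three get_meminfo calls in this pass follow the same pattern the trace spells out field by field: pick /proc/meminfo (or the per-node copy under /sys/devices/system/node when a node number is supplied), split each line on ': ', and print the value of the first key that matches the requested name. A minimal stand-alone sketch of that pattern, for reference only (the function name is illustrative; this is not the literal code in setup/common.sh):

  #!/usr/bin/env bash
  # Look up one field from /proc/meminfo or from a per-node meminfo file.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node counters live in sysfs when a node number is given.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while read -r line; do
          line=${line#"Node $node "}          # per-node files prefix every line with "Node N "
          IFS=': ' read -r var val _ <<<"$line"
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"                # value only; the trailing "kB" unit lands in $_
              return 0
          fi
      done <"$mem_f"
      return 1
  }
  # e.g. on the box above: get_meminfo_sketch HugePages_Total   -> 1024
  #                        get_meminfo_sketch HugePages_Surp 0  -> surplus 2 MiB pages on node0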
00:02:44.974 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... 00:02:44.974 setup/common.sh@17-@31: get=HugePages_Surp, node unset, mem_f=/proc/meminfo, mapfile -t mem, then IFS=': ' / read -r var val _ over each line ...]
00:02:44.974 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46858376 kB' 'MemAvailable: 50345228 kB' 'Buffers: 2704 kB' 'Cached: 9331732 kB' 'SwapCached: 0 kB' 'Active: 6323288 kB' 'Inactive: 3507644 kB' 'Active(anon): 5933516 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499744 kB' 'Mapped: 164572 kB' 'Shmem: 5437020 kB' 'KReclaimable: 166120 kB' 'Slab: 489928 kB' 'SReclaimable: 166120 kB' 'SUnreclaim: 323808 kB' 'KernelStack: 12928 kB' 'PageTables: 7432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7081112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB'
[... 00:02:44.974-00:02:44.976 setup/common.sh@31-@32: every key before HugePages_Surp (MemTotal through HugePages_Rsvd) is read and skipped, then HugePages_Surp matches ...]
00:02:44.976 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:44.976 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:44.976 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
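A quick sanity check on the numbers collected so far: with anon=0 and surp=0, the pool under test is simply HugePages_Total = HugePages_Free = 1024 pages of Hugepagesize = 2048 kB, i.e. 1024 x 2048 kB = 2097152 kB, which matches the Hugetlb figure in the snapshots above. The same cross-check can be recomputed on a live box with a couple of awk one-liners (illustrative only, not part of the test scripts):

  # Recompute the hugetlb footprint from /proc/meminfo and compare it with the kernel's own total.
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  hugetlb_kb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)
  echo "pool: ${total} x ${size_kb} kB = $((total * size_kb)) kB (Hugetlb reports ${hugetlb_kb} kB)"

Note that Hugetlb covers every hugepage size on the system, so the two figures only coincide when, as here, a single 2 MiB pool is in use.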
00:02:44.976 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... 00:02:44.976 setup/common.sh@17-@31: get=HugePages_Rsvd, node unset, mem_f=/proc/meminfo, mapfile -t mem, then IFS=': ' / read -r var val _ over each line ...]
00:02:44.976 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46858696 kB' 'MemAvailable: 50345548 kB' 'Buffers: 2704 kB' 'Cached: 9331736 kB' 'SwapCached: 0 kB' 'Active: 6322680 kB' 'Inactive: 3507644 kB' 'Active(anon): 5932908 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499164 kB' 'Mapped: 164572 kB' 'Shmem: 5437024 kB' 'KReclaimable: 166120 kB' 'Slab: 489928 kB' 'SReclaimable: 166120 kB' 'SUnreclaim: 323808 kB' 'KernelStack: 12928 kB' 'PageTables: 7432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7081136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB'
[... 00:02:44.976-00:02:44.977 setup/common.sh@31-@32: the scan for HugePages_Rsvd walks the same keys one by one; MemTotal through SecPageTables have been read and skipped so far ...]
00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.977 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:44.978 nr_hugepages=1024 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:44.978 resv_hugepages=0 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:44.978 surplus_hugepages=0 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:44.978 anon_hugepages=0 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46859112 kB' 'MemAvailable: 50345964 kB' 'Buffers: 2704 kB' 'Cached: 9331772 kB' 'SwapCached: 0 kB' 'Active: 6323008 kB' 'Inactive: 3507644 kB' 'Active(anon): 5933236 kB' 'Inactive(anon): 0 kB' 'Active(file): 389772 kB' 'Inactive(file): 3507644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499452 kB' 'Mapped: 164572 kB' 'Shmem: 5437060 kB' 'KReclaimable: 166120 kB' 'Slab: 489928 kB' 'SReclaimable: 166120 kB' 'SUnreclaim: 323808 kB' 'KernelStack: 12928 kB' 'PageTables: 7432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7081156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1277532 kB' 'DirectMap2M: 13322240 kB' 'DirectMap1G: 54525952 kB' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.978 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.979 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:44.980 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:44.980 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.980 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:44.980 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.980 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:44.980 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 10:19:33 
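The get_nodes step traced just above reduces to globbing the per-NUMA-node sysfs directories and recording each node's hugepage count (two nodes on this host, with all 1024 pages landing on node 0). A minimal sketch of that loop, assuming the standard /sys/devices/system/node layout and 2048 kB hugepages rather than the verbatim SPDK helper, is:

    shopt -s extglob                      # needed for the +([0-9]) glob used below
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # record each node's 2MB hugepage count, keyed by node index
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"      # 2 on this host
    for n in "${!nodes_sys[@]}"; do echo "node$n=${nodes_sys[$n]}"; done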
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22122368 kB' 'MemUsed: 10707516 kB' 'SwapCached: 0 kB' 'Active: 4450612 kB' 'Inactive: 3350280 kB' 'Active(anon): 4316936 kB' 'Inactive(anon): 0 kB' 'Active(file): 133676 kB' 'Inactive(file): 3350280 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7531552 kB' 'Mapped: 71812 kB' 'AnonPages: 272496 kB' 'Shmem: 4047596 kB' 'KernelStack: 5576 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67908 kB' 'Slab: 198316 kB' 'SReclaimable: 67908 kB' 'SUnreclaim: 130408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.240 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.240 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.240 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.240 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.240 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.240 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:45.240 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:45.240 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:45.240 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:45.240 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:45.240 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:45.240 node0=1024 expecting 1024 00:02:45.240 10:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:45.240 00:02:45.240 real 0m2.887s 00:02:45.240 user 0m1.158s 00:02:45.240 sys 0m1.650s 00:02:45.240 10:19:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:45.240 10:19:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:45.240 ************************************ 00:02:45.240 END TEST no_shrink_alloc 00:02:45.240 ************************************ 00:02:45.240 10:19:33 setup.sh.hugepages -- 
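The long runs of continue entries above are just the xtrace of a linear scan over a meminfo file: each line is split on ': ', skipped while the key does not match the requested one, and the matching value is echoed (0 for HugePages_Rsvd, 1024 for HugePages_Total, 0 for node 0's HugePages_Surp). A condensed sketch of that lookup, assuming a key plus an optional NUMA node index as arguments (a simplification, not the verbatim SPDK common.sh helper), is:

    shopt -s extglob                                    # for the "Node N " prefix strip
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # Use the per-node meminfo file when a node index is given and present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")                # per-node lines start with "Node N "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        echo 0
    }
    # Example calls matching the trace: get_meminfo HugePages_Total  -> 1024
    #                                   get_meminfo HugePages_Surp 0 -> 0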
common/autotest_common.sh@1142 -- # return 0 00:02:45.240 10:19:33 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:02:45.240 10:19:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:45.240 10:19:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:45.240 10:19:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.240 10:19:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:45.240 10:19:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.240 10:19:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:45.240 10:19:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:45.240 10:19:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.240 10:19:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:45.240 10:19:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.240 10:19:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:45.240 10:19:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:45.240 10:19:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:45.240 00:02:45.240 real 0m11.969s 00:02:45.240 user 0m4.553s 00:02:45.240 sys 0m6.245s 00:02:45.240 10:19:33 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:45.240 10:19:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:45.240 ************************************ 00:02:45.240 END TEST hugepages 00:02:45.240 ************************************ 00:02:45.240 10:19:33 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:45.240 10:19:33 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:45.240 10:19:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:45.240 10:19:33 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:45.240 10:19:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:45.240 ************************************ 00:02:45.240 START TEST driver 00:02:45.240 ************************************ 00:02:45.240 10:19:33 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:45.240 * Looking for test storage... 
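The clear_hp teardown traced above resets every hugepage pool before the next suite runs: for each NUMA node and each supported page size it writes 0 to nr_hugepages, then exports CLEAR_HUGE=yes for the later setup.sh invocations. A condensed sketch of that reset, assuming the standard sysfs layout seen on this host:

    # Release all reserved hugepages on every node, for every page size.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes    # picked up by the subsequent setup.sh reset/config runs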
00:02:45.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:45.240 10:19:33 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:02:45.240 10:19:33 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:45.240 10:19:33 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.773 10:19:36 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:47.773 10:19:36 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:47.773 10:19:36 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:47.773 10:19:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:47.773 ************************************ 00:02:47.773 START TEST guess_driver 00:02:47.773 ************************************ 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:02:47.773 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:48.031 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:48.031 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:48.031 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:48.031 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:48.031 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:48.031 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:48.031 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:48.031 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:48.031 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:02:48.031 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:02:48.031 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:48.032 10:19:36 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:48.032 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:48.032 Looking for driver=vfio-pci 00:02:48.032 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:48.032 10:19:36 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:02:48.032 10:19:36 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.032 10:19:36 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:49.409 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.410 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.347 10:19:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.347 10:19:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.347 10:19:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.347 10:19:38 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:50.347 10:19:38 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:02:50.347 10:19:38 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.347 10:19:38 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.884 00:02:52.884 real 0m4.900s 00:02:52.884 user 0m1.095s 00:02:52.884 sys 0m1.850s 00:02:52.884 10:19:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:52.884 10:19:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:02:52.884 ************************************ 00:02:52.884 END TEST guess_driver 00:02:52.884 ************************************ 00:02:52.884 10:19:41 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:02:52.884 00:02:52.884 real 0m7.612s 00:02:52.884 user 0m1.693s 00:02:52.884 sys 0m2.929s 00:02:52.884 10:19:41 
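guess_driver, traced above, settles on vfio-pci by combining three observations: whether unsafe no-IOMMU mode is exposed, how many IOMMU groups the kernel reports (141 on this host), and whether modprobe can resolve vfio_pci on the running kernel. A minimal sketch of that decision; the uio_pci_generic fallback named below is an assumption standing in for whatever the harness would otherwise pick:

    pick_driver() {
        local unsafe=N ngroups
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        ngroups=$(ls -d /sys/kernel/iommu_groups/* 2>/dev/null | wc -l)
        # vfio-pci is usable when the IOMMU is active (groups exist) or unsafe
        # no-IOMMU mode is enabled, and the module actually resolves.
        if { (( ngroups > 0 )) || [[ $unsafe == Y ]]; } &&
            modprobe --show-depends vfio_pci >/dev/null 2>&1; then
            echo vfio-pci
        else
            echo uio_pci_generic
        fi
    }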
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:52.884 10:19:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:52.884 ************************************ 00:02:52.884 END TEST driver 00:02:52.884 ************************************ 00:02:52.884 10:19:41 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:52.885 10:19:41 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:52.885 10:19:41 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:52.885 10:19:41 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.885 10:19:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:52.885 ************************************ 00:02:52.885 START TEST devices 00:02:52.885 ************************************ 00:02:52.885 10:19:41 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:52.885 * Looking for test storage... 00:02:52.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:52.885 10:19:41 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:52.885 10:19:41 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:02:52.885 10:19:41 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:52.885 10:19:41 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.292 10:19:42 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:02:54.292 10:19:42 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:54.292 10:19:42 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:54.292 10:19:42 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:54.292 10:19:42 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:54.292 10:19:42 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:54.292 10:19:42 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:54.292 10:19:42 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:54.292 10:19:42 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:54.292 10:19:42 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:02:54.292 10:19:42 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:02:54.292 10:19:42 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:02:54.292 10:19:42 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:02:54.292 10:19:42 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:02:54.292 10:19:42 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:02:54.292 10:19:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:02:54.292 10:19:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:02:54.292 10:19:42 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:02:54.293 10:19:42 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:02:54.293 10:19:42 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:02:54.293 10:19:42 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:02:54.293 
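Before nvme_mount starts, the devices.sh prologue traced above filters candidate namespaces: zoned devices are excluded, a disk already carrying partition-table data is skipped (the spdk-gpt.py probe plus the blkid PTTYPE check), and the disk must be at least min_disk_size, 3 GiB here. A rough sketch of those three checks using only the sysfs and blkid interfaces visible in the log; the harness's own GPT probe is left out:

    is_candidate_disk() {
        local dev=$1                      # e.g. nvme0n1
        # 1) skip zoned (ZNS) namespaces
        [[ $(< "/sys/block/$dev/queue/zoned") == none ]] || return 1
        # 2) skip disks that already carry a partition table
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || return 1
        # 3) /sys/block/<dev>/size is in 512-byte sectors; require >= 3 GiB
        (( $(< "/sys/block/$dev/size") * 512 >= 3 * 1024 * 1024 * 1024 ))
    }

On the disk above this works out to 1000204886016 bytes, comfortably over the threshold, so nvme0n1 becomes the test disk.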
10:19:42 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:02:54.293 No valid GPT data, bailing 00:02:54.293 10:19:42 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:54.293 10:19:42 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:02:54.293 10:19:42 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:02:54.293 10:19:42 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:02:54.293 10:19:42 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:02:54.293 10:19:42 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:02:54.293 10:19:42 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:02:54.293 10:19:42 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:02:54.293 10:19:42 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:02:54.293 10:19:42 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:02:54.293 10:19:42 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:02:54.293 10:19:42 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:02:54.293 10:19:42 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:02:54.293 10:19:42 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:54.293 10:19:42 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:54.293 10:19:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:02:54.293 ************************************ 00:02:54.293 START TEST nvme_mount 00:02:54.293 ************************************ 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:54.293 10:19:42 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:02:55.671 Creating new GPT entries in memory. 00:02:55.671 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:55.671 other utilities. 00:02:55.671 10:19:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:02:55.671 10:19:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:55.671 10:19:43 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:55.671 10:19:43 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:55.671 10:19:43 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:56.610 Creating new GPT entries in memory. 00:02:56.610 The operation has completed successfully. 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1067074 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:56.610 10:19:44 
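The nvme_mount setup traced above boils down to: wipe the disk, create one roughly 1 GiB partition, format it ext4 and mount it under the test directory, then drop the dummy file the verify step looks for. A condensed sketch with the same tools; the flock wrapper and the udev-settling helper (sync_dev_uevents.sh) are omitted, and the mount point is the workspace path from the log:

    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                 # destroy any existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:2099199      # one ~1 GiB partition (512-byte sectors)
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"
    touch "$mnt/test_nvme"                   # the marker file the verify step expects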
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.610 10:19:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:57.545 10:19:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.545 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:57.545 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:57.545 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.545 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:57.545 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:57.545 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:02:57.546 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.546 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.805 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:57.805 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:02:57.805 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:57.805 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:57.805 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:58.063 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:02:58.063 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:02:58.063 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:02:58.063 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:02:58.063 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:02:58.063 10:19:46 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:02:58.063 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:58.063 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:02:58.063 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:02:58.063 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:58.063 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:58.063 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:02:58.063 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:02:58.063 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:58.064 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:58.064 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:58.064 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:58.064 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:58.064 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:58.064 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.064 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:02:58.064 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:58.064 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.064 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:58.998 10:19:47 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:58.998 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:59.257 10:19:47 
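The verify pass traced above reads the 'setup.sh config' status output line by line and, for the allowed controller 0000:0b:00.0, expects the status column to name the active mount, which is why setup.sh reports "so not binding PCI dev". As a generic illustration of the same idea (not the script's own parser), one way to ask from the shell whether any namespace behind that controller is currently mounted:

    pci=0000:0b:00.0
    for ns in /sys/bus/pci/devices/$pci/nvme/nvme*/nvme*n*; do
        dev=${ns##*/}
        if lsblk -no MOUNTPOINT "/dev/$dev" | grep -q .; then
            echo "$dev on $pci is mounted; do not rebind this controller"
        fi
    done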
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.257 10:19:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.195 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.195 10:19:48 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:00.454 10:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.714 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:00.714 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:00.714 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:00.714 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:00.714 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:00.714 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:00.714 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:00.714 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:00.714 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:00.714 00:03:00.714 real 0m6.248s 00:03:00.714 user 0m1.452s 00:03:00.714 sys 0m2.349s 00:03:00.714 10:19:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:00.714 10:19:49 
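cleanup_nvme, traced above, is the mirror image of the setup: unmount the test directory if it is still a mount point, then wipe the filesystem signature off the partition and the partition table off the whole disk so the next test starts from a blank device. A condensed sketch of that teardown:

    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    disk=/dev/nvme0n1

    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"   # ext4 signature on the partition
    [[ -b $disk ]] && wipefs --all "$disk"           # GPT primary/backup headers and the PMBR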
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:00.714 ************************************ 00:03:00.714 END TEST nvme_mount 00:03:00.714 ************************************ 00:03:00.714 10:19:49 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:00.714 10:19:49 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:00.714 10:19:49 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:00.714 10:19:49 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:00.714 10:19:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:00.714 ************************************ 00:03:00.714 START TEST dm_mount 00:03:00.714 ************************************ 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:00.714 10:19:49 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:01.652 Creating new GPT entries in memory. 00:03:01.652 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:01.652 other utilities. 00:03:01.652 10:19:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:01.652 10:19:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:01.652 10:19:50 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:01.652 10:19:50 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:01.652 10:19:50 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:02.588 Creating new GPT entries in memory. 00:03:02.588 The operation has completed successfully. 00:03:02.588 10:19:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:02.588 10:19:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:02.588 10:19:51 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:02.588 10:19:51 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:02.588 10:19:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:03.962 The operation has completed successfully. 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1069466 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.962 10:19:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.896 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:05.154 10:19:53 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.154 10:19:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.086 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:06.345 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:06.604 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:06.604 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:06.604 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:06.604 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:06.604 10:19:54 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:06.604 00:03:06.604 real 0m5.845s 00:03:06.604 user 0m0.978s 00:03:06.604 sys 0m1.709s 00:03:06.604 10:19:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.604 10:19:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:06.604 ************************************ 00:03:06.604 END TEST dm_mount 00:03:06.604 ************************************ 00:03:06.604 10:19:54 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:06.604 10:19:54 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:06.604 10:19:54 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:06.604 10:19:54 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.604 10:19:54 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:06.604 10:19:54 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:06.604 10:19:54 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:06.604 10:19:54 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:06.863 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:06.863 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:06.863 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:06.863 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:06.863 10:19:55 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:06.863 10:19:55 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:06.863 10:19:55 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:06.863 10:19:55 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:06.863 10:19:55 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:06.863 10:19:55 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:06.863 10:19:55 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:06.863 00:03:06.863 real 0m13.948s 00:03:06.863 user 0m3.034s 00:03:06.863 sys 0m5.074s 00:03:06.863 10:19:55 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.863 10:19:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:06.863 ************************************ 00:03:06.863 END TEST devices 00:03:06.863 ************************************ 00:03:06.863 10:19:55 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:06.863 00:03:06.863 real 0m44.569s 00:03:06.863 user 0m12.654s 00:03:06.863 sys 0m19.948s 00:03:06.863 10:19:55 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.863 10:19:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:06.863 ************************************ 00:03:06.863 END TEST setup.sh 00:03:06.863 ************************************ 00:03:06.863 10:19:55 -- common/autotest_common.sh@1142 -- # return 0 00:03:06.863 10:19:55 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:08.239 Hugepages 00:03:08.239 node hugesize free / total 00:03:08.239 node0 1048576kB 0 / 0 00:03:08.239 node0 2048kB 2048 / 2048 00:03:08.239 node1 1048576kB 0 / 0 00:03:08.239 node1 2048kB 0 / 0 00:03:08.239 00:03:08.239 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:08.239 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:08.239 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:08.239 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:08.239 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:08.239 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:08.239 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:08.239 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:08.239 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:08.239 NVMe 0000:0b:00.0 
8086 0a54 0 nvme nvme0 nvme0n1 00:03:08.239 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:08.239 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:08.239 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:08.239 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:08.239 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:08.239 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:08.239 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:08.239 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:08.239 10:19:56 -- spdk/autotest.sh@130 -- # uname -s 00:03:08.239 10:19:56 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:08.239 10:19:56 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:08.239 10:19:56 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:09.620 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:09.620 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:09.620 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:09.620 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:09.620 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:09.620 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:09.620 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:09.620 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:09.620 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:09.620 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:09.620 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:09.620 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:09.620 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:09.620 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:09.620 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:09.620 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:10.558 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:10.558 10:19:59 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:11.495 10:20:00 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:11.495 10:20:00 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:11.495 10:20:00 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:11.495 10:20:00 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:11.495 10:20:00 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:11.495 10:20:00 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:11.495 10:20:00 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:11.495 10:20:00 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:11.495 10:20:00 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:11.753 10:20:00 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:11.753 10:20:00 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:03:11.753 10:20:00 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.128 Waiting for block devices as requested 00:03:13.129 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:13.129 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:13.129 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:13.129 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:13.129 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:13.388 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:13.388 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:13.388 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:13.388 0000:0b:00.0 (8086 0a54): vfio-pci -> 
nvme 00:03:13.649 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:13.649 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:13.909 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:13.909 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:13.909 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:13.909 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:14.169 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:14.169 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:14.169 10:20:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:14.169 10:20:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:03:14.169 10:20:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:14.169 10:20:02 -- common/autotest_common.sh@1502 -- # grep 0000:0b:00.0/nvme/nvme 00:03:14.169 10:20:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:14.169 10:20:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:03:14.169 10:20:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:14.169 10:20:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:14.169 10:20:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:14.169 10:20:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:14.169 10:20:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:14.169 10:20:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:14.169 10:20:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:14.169 10:20:02 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:14.169 10:20:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:14.169 10:20:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:14.169 10:20:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:14.169 10:20:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:14.169 10:20:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:14.169 10:20:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:14.169 10:20:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:14.169 10:20:02 -- common/autotest_common.sh@1557 -- # continue 00:03:14.169 10:20:02 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:14.169 10:20:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:14.169 10:20:02 -- common/autotest_common.sh@10 -- # set +x 00:03:14.169 10:20:02 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:14.169 10:20:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:14.169 10:20:02 -- common/autotest_common.sh@10 -- # set +x 00:03:14.169 10:20:02 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:15.543 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:15.543 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:15.543 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:15.543 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:15.543 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:15.543 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:15.543 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:15.543 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:15.543 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:15.543 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:15.543 
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:15.543 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:15.543 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:15.803 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:15.803 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:15.803 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:16.742 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:16.742 10:20:05 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:16.742 10:20:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:16.742 10:20:05 -- common/autotest_common.sh@10 -- # set +x 00:03:16.742 10:20:05 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:16.742 10:20:05 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:16.742 10:20:05 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:16.742 10:20:05 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:16.742 10:20:05 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:16.742 10:20:05 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:16.742 10:20:05 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:16.742 10:20:05 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:16.742 10:20:05 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:16.742 10:20:05 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:16.742 10:20:05 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:16.742 10:20:05 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:16.742 10:20:05 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:03:16.742 10:20:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:16.742 10:20:05 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:03:16.742 10:20:05 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:16.742 10:20:05 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:16.742 10:20:05 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:16.742 10:20:05 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:0b:00.0 00:03:16.742 10:20:05 -- common/autotest_common.sh@1592 -- # [[ -z 0000:0b:00.0 ]] 00:03:16.742 10:20:05 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1074649 00:03:16.742 10:20:05 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:16.742 10:20:05 -- common/autotest_common.sh@1598 -- # waitforlisten 1074649 00:03:16.742 10:20:05 -- common/autotest_common.sh@829 -- # '[' -z 1074649 ']' 00:03:16.742 10:20:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:16.742 10:20:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:16.742 10:20:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:16.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:16.742 10:20:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:16.742 10:20:05 -- common/autotest_common.sh@10 -- # set +x 00:03:17.001 [2024-07-15 10:20:05.318063] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:03:17.001 [2024-07-15 10:20:05.318158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1074649 ] 00:03:17.001 EAL: No free 2048 kB hugepages reported on node 1 00:03:17.001 [2024-07-15 10:20:05.376708] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:17.001 [2024-07-15 10:20:05.476611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:17.257 10:20:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:17.257 10:20:05 -- common/autotest_common.sh@862 -- # return 0 00:03:17.257 10:20:05 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:17.257 10:20:05 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:17.257 10:20:05 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:03:20.539 nvme0n1 00:03:20.539 10:20:08 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:20.539 [2024-07-15 10:20:09.014469] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:20.539 [2024-07-15 10:20:09.014508] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:20.539 request: 00:03:20.539 { 00:03:20.539 "nvme_ctrlr_name": "nvme0", 00:03:20.539 "password": "test", 00:03:20.539 "method": "bdev_nvme_opal_revert", 00:03:20.539 "req_id": 1 00:03:20.539 } 00:03:20.539 Got JSON-RPC error response 00:03:20.539 response: 00:03:20.539 { 00:03:20.539 "code": -32603, 00:03:20.539 "message": "Internal error" 00:03:20.539 } 00:03:20.539 10:20:09 -- common/autotest_common.sh@1604 -- # true 00:03:20.539 10:20:09 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:20.539 10:20:09 -- common/autotest_common.sh@1608 -- # killprocess 1074649 00:03:20.539 10:20:09 -- common/autotest_common.sh@948 -- # '[' -z 1074649 ']' 00:03:20.539 10:20:09 -- common/autotest_common.sh@952 -- # kill -0 1074649 00:03:20.539 10:20:09 -- common/autotest_common.sh@953 -- # uname 00:03:20.539 10:20:09 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:20.539 10:20:09 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1074649 00:03:20.539 10:20:09 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:20.539 10:20:09 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:20.539 10:20:09 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1074649' 00:03:20.539 killing process with pid 1074649 00:03:20.539 10:20:09 -- common/autotest_common.sh@967 -- # kill 1074649 00:03:20.539 10:20:09 -- common/autotest_common.sh@972 -- # wait 1074649 00:03:22.433 10:20:10 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:22.433 10:20:10 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:22.433 10:20:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:22.433 10:20:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:22.433 10:20:10 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:22.433 10:20:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:22.433 10:20:10 -- common/autotest_common.sh@10 -- # set +x 00:03:22.433 10:20:10 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:22.433 10:20:10 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:22.433 10:20:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.433 10:20:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.433 10:20:10 -- common/autotest_common.sh@10 -- # set +x 00:03:22.433 ************************************ 00:03:22.433 START TEST env 00:03:22.433 ************************************ 00:03:22.433 10:20:10 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:22.433 * Looking for test storage... 00:03:22.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:22.433 10:20:10 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:22.433 10:20:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.433 10:20:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.433 10:20:10 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.433 ************************************ 00:03:22.433 START TEST env_memory 00:03:22.433 ************************************ 00:03:22.433 10:20:10 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:22.433 00:03:22.433 00:03:22.433 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.433 http://cunit.sourceforge.net/ 00:03:22.433 00:03:22.433 00:03:22.433 Suite: memory 00:03:22.433 Test: alloc and free memory map ...[2024-07-15 10:20:10.899130] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:22.433 passed 00:03:22.433 Test: mem map translation ...[2024-07-15 10:20:10.918814] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:22.433 [2024-07-15 10:20:10.918836] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:22.433 [2024-07-15 10:20:10.918877] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:22.433 [2024-07-15 10:20:10.918889] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:22.433 passed 00:03:22.433 Test: mem map registration ...[2024-07-15 10:20:10.959411] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:22.433 [2024-07-15 10:20:10.959430] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:22.433 passed 00:03:22.717 Test: mem map adjacent registrations ...passed 00:03:22.717 00:03:22.717 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.717 suites 1 1 n/a 0 0 00:03:22.717 tests 4 4 4 0 0 00:03:22.717 asserts 152 152 152 0 n/a 00:03:22.717 00:03:22.717 Elapsed time = 0.140 seconds 00:03:22.717 00:03:22.717 real 0m0.148s 00:03:22.717 user 0m0.140s 00:03:22.717 sys 0m0.008s 00:03:22.717 10:20:11 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:22.717 10:20:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:22.717 ************************************ 00:03:22.717 END TEST env_memory 00:03:22.717 ************************************ 00:03:22.717 10:20:11 env -- common/autotest_common.sh@1142 -- # return 0 00:03:22.717 10:20:11 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:22.717 10:20:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.717 10:20:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.717 10:20:11 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.717 ************************************ 00:03:22.717 START TEST env_vtophys 00:03:22.717 ************************************ 00:03:22.717 10:20:11 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:22.717 EAL: lib.eal log level changed from notice to debug 00:03:22.717 EAL: Detected lcore 0 as core 0 on socket 0 00:03:22.717 EAL: Detected lcore 1 as core 1 on socket 0 00:03:22.717 EAL: Detected lcore 2 as core 2 on socket 0 00:03:22.717 EAL: Detected lcore 3 as core 3 on socket 0 00:03:22.717 EAL: Detected lcore 4 as core 4 on socket 0 00:03:22.717 EAL: Detected lcore 5 as core 5 on socket 0 00:03:22.717 EAL: Detected lcore 6 as core 8 on socket 0 00:03:22.717 EAL: Detected lcore 7 as core 9 on socket 0 00:03:22.717 EAL: Detected lcore 8 as core 10 on socket 0 00:03:22.717 EAL: Detected lcore 9 as core 11 on socket 0 00:03:22.717 EAL: Detected lcore 10 as core 12 on socket 0 00:03:22.717 EAL: Detected lcore 11 as core 13 on socket 0 00:03:22.717 EAL: Detected lcore 12 as core 0 on socket 1 00:03:22.717 EAL: Detected lcore 13 as core 1 on socket 1 00:03:22.717 EAL: Detected lcore 14 as core 2 on socket 1 00:03:22.717 EAL: Detected lcore 15 as core 3 on socket 1 00:03:22.718 EAL: Detected lcore 16 as core 4 on socket 1 00:03:22.718 EAL: Detected lcore 17 as core 5 on socket 1 00:03:22.718 EAL: Detected lcore 18 as core 8 on socket 1 00:03:22.718 EAL: Detected lcore 19 as core 9 on socket 1 00:03:22.718 EAL: Detected lcore 20 as core 10 on socket 1 00:03:22.718 EAL: Detected lcore 21 as core 11 on socket 1 00:03:22.718 EAL: Detected lcore 22 as core 12 on socket 1 00:03:22.718 EAL: Detected lcore 23 as core 13 on socket 1 00:03:22.718 EAL: Detected lcore 24 as core 0 on socket 0 00:03:22.718 EAL: Detected lcore 25 as core 1 on socket 0 00:03:22.718 EAL: Detected lcore 26 as core 2 on socket 0 00:03:22.718 EAL: Detected lcore 27 as core 3 on socket 0 00:03:22.718 EAL: Detected lcore 28 as core 4 on socket 0 00:03:22.718 EAL: Detected lcore 29 as core 5 on socket 0 00:03:22.718 EAL: Detected lcore 30 as core 8 on socket 0 00:03:22.718 EAL: Detected lcore 31 as core 9 on socket 0 00:03:22.718 EAL: Detected lcore 32 as core 10 on socket 0 00:03:22.718 EAL: Detected lcore 33 as core 11 on socket 0 00:03:22.718 EAL: Detected lcore 34 as core 12 on socket 0 00:03:22.718 EAL: Detected lcore 35 as core 13 on socket 0 00:03:22.718 EAL: Detected lcore 36 as core 0 on socket 1 00:03:22.718 EAL: Detected lcore 37 as core 1 on socket 1 00:03:22.718 EAL: Detected lcore 38 as core 2 on socket 1 00:03:22.718 EAL: Detected lcore 39 as core 3 on socket 1 00:03:22.718 EAL: Detected lcore 40 as core 4 on socket 1 00:03:22.718 EAL: Detected lcore 41 as core 5 on socket 1 00:03:22.718 EAL: Detected 
lcore 42 as core 8 on socket 1 00:03:22.718 EAL: Detected lcore 43 as core 9 on socket 1 00:03:22.718 EAL: Detected lcore 44 as core 10 on socket 1 00:03:22.718 EAL: Detected lcore 45 as core 11 on socket 1 00:03:22.718 EAL: Detected lcore 46 as core 12 on socket 1 00:03:22.718 EAL: Detected lcore 47 as core 13 on socket 1 00:03:22.718 EAL: Maximum logical cores by configuration: 128 00:03:22.718 EAL: Detected CPU lcores: 48 00:03:22.718 EAL: Detected NUMA nodes: 2 00:03:22.718 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:22.718 EAL: Detected shared linkage of DPDK 00:03:22.718 EAL: No shared files mode enabled, IPC will be disabled 00:03:22.718 EAL: Bus pci wants IOVA as 'DC' 00:03:22.718 EAL: Buses did not request a specific IOVA mode. 00:03:22.718 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:22.718 EAL: Selected IOVA mode 'VA' 00:03:22.718 EAL: No free 2048 kB hugepages reported on node 1 00:03:22.718 EAL: Probing VFIO support... 00:03:22.718 EAL: IOMMU type 1 (Type 1) is supported 00:03:22.718 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:22.718 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:22.718 EAL: VFIO support initialized 00:03:22.718 EAL: Ask a virtual area of 0x2e000 bytes 00:03:22.718 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:22.718 EAL: Setting up physically contiguous memory... 00:03:22.718 EAL: Setting maximum number of open files to 524288 00:03:22.718 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:22.718 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:22.718 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:22.718 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.718 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:22.718 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:22.718 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.718 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:22.718 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:22.718 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.718 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:22.718 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:22.718 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.718 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:22.718 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:22.718 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.718 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:22.718 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:22.718 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.718 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:22.718 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:22.718 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.718 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:22.718 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:22.718 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.718 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:22.718 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:22.718 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:22.718 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.718 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:22.718 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:03:22.718 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.718 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:22.718 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:22.718 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.718 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:22.718 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:22.718 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.718 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:22.718 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:22.718 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.718 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:22.718 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:22.718 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.718 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:22.718 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:22.718 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.718 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:22.718 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:22.718 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.718 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:22.718 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:22.718 EAL: Hugepages will be freed exactly as allocated. 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: TSC frequency is ~2700000 KHz 00:03:22.718 EAL: Main lcore 0 is ready (tid=7f723710ea00;cpuset=[0]) 00:03:22.718 EAL: Trying to obtain current memory policy. 00:03:22.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.718 EAL: Restoring previous memory policy: 0 00:03:22.718 EAL: request: mp_malloc_sync 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: Heap on socket 0 was expanded by 2MB 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:22.718 EAL: Mem event callback 'spdk:(nil)' registered 00:03:22.718 00:03:22.718 00:03:22.718 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.718 http://cunit.sourceforge.net/ 00:03:22.718 00:03:22.718 00:03:22.718 Suite: components_suite 00:03:22.718 Test: vtophys_malloc_test ...passed 00:03:22.718 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:22.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.718 EAL: Restoring previous memory policy: 4 00:03:22.718 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.718 EAL: request: mp_malloc_sync 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: Heap on socket 0 was expanded by 4MB 00:03:22.718 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.718 EAL: request: mp_malloc_sync 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: Heap on socket 0 was shrunk by 4MB 00:03:22.718 EAL: Trying to obtain current memory policy. 
00:03:22.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.718 EAL: Restoring previous memory policy: 4 00:03:22.718 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.718 EAL: request: mp_malloc_sync 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: Heap on socket 0 was expanded by 6MB 00:03:22.718 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.718 EAL: request: mp_malloc_sync 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: Heap on socket 0 was shrunk by 6MB 00:03:22.718 EAL: Trying to obtain current memory policy. 00:03:22.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.718 EAL: Restoring previous memory policy: 4 00:03:22.718 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.718 EAL: request: mp_malloc_sync 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: Heap on socket 0 was expanded by 10MB 00:03:22.718 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.718 EAL: request: mp_malloc_sync 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: Heap on socket 0 was shrunk by 10MB 00:03:22.718 EAL: Trying to obtain current memory policy. 00:03:22.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.718 EAL: Restoring previous memory policy: 4 00:03:22.718 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.718 EAL: request: mp_malloc_sync 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: Heap on socket 0 was expanded by 18MB 00:03:22.718 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.718 EAL: request: mp_malloc_sync 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: Heap on socket 0 was shrunk by 18MB 00:03:22.718 EAL: Trying to obtain current memory policy. 00:03:22.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.718 EAL: Restoring previous memory policy: 4 00:03:22.718 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.718 EAL: request: mp_malloc_sync 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: Heap on socket 0 was expanded by 34MB 00:03:22.718 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.718 EAL: request: mp_malloc_sync 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: Heap on socket 0 was shrunk by 34MB 00:03:22.718 EAL: Trying to obtain current memory policy. 00:03:22.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.718 EAL: Restoring previous memory policy: 4 00:03:22.718 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.718 EAL: request: mp_malloc_sync 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: Heap on socket 0 was expanded by 66MB 00:03:22.718 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.718 EAL: request: mp_malloc_sync 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.718 EAL: Heap on socket 0 was shrunk by 66MB 00:03:22.718 EAL: Trying to obtain current memory policy. 
00:03:22.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.718 EAL: Restoring previous memory policy: 4 00:03:22.718 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.718 EAL: request: mp_malloc_sync 00:03:22.718 EAL: No shared files mode enabled, IPC is disabled 00:03:22.719 EAL: Heap on socket 0 was expanded by 130MB 00:03:22.996 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.996 EAL: request: mp_malloc_sync 00:03:22.996 EAL: No shared files mode enabled, IPC is disabled 00:03:22.996 EAL: Heap on socket 0 was shrunk by 130MB 00:03:22.996 EAL: Trying to obtain current memory policy. 00:03:22.996 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.996 EAL: Restoring previous memory policy: 4 00:03:22.996 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.996 EAL: request: mp_malloc_sync 00:03:22.996 EAL: No shared files mode enabled, IPC is disabled 00:03:22.996 EAL: Heap on socket 0 was expanded by 258MB 00:03:22.996 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.996 EAL: request: mp_malloc_sync 00:03:22.996 EAL: No shared files mode enabled, IPC is disabled 00:03:22.996 EAL: Heap on socket 0 was shrunk by 258MB 00:03:22.996 EAL: Trying to obtain current memory policy. 00:03:22.996 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.253 EAL: Restoring previous memory policy: 4 00:03:23.253 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.253 EAL: request: mp_malloc_sync 00:03:23.253 EAL: No shared files mode enabled, IPC is disabled 00:03:23.253 EAL: Heap on socket 0 was expanded by 514MB 00:03:23.253 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.253 EAL: request: mp_malloc_sync 00:03:23.253 EAL: No shared files mode enabled, IPC is disabled 00:03:23.253 EAL: Heap on socket 0 was shrunk by 514MB 00:03:23.253 EAL: Trying to obtain current memory policy. 
00:03:23.253 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.510 EAL: Restoring previous memory policy: 4 00:03:23.510 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.510 EAL: request: mp_malloc_sync 00:03:23.510 EAL: No shared files mode enabled, IPC is disabled 00:03:23.510 EAL: Heap on socket 0 was expanded by 1026MB 00:03:23.766 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.023 EAL: request: mp_malloc_sync 00:03:24.023 EAL: No shared files mode enabled, IPC is disabled 00:03:24.023 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:24.023 passed 00:03:24.023 00:03:24.023 Run Summary: Type Total Ran Passed Failed Inactive 00:03:24.023 suites 1 1 n/a 0 0 00:03:24.023 tests 2 2 2 0 0 00:03:24.023 asserts 497 497 497 0 n/a 00:03:24.023 00:03:24.023 Elapsed time = 1.285 seconds 00:03:24.023 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.023 EAL: request: mp_malloc_sync 00:03:24.023 EAL: No shared files mode enabled, IPC is disabled 00:03:24.023 EAL: Heap on socket 0 was shrunk by 2MB 00:03:24.023 EAL: No shared files mode enabled, IPC is disabled 00:03:24.023 EAL: No shared files mode enabled, IPC is disabled 00:03:24.023 EAL: No shared files mode enabled, IPC is disabled 00:03:24.023 00:03:24.023 real 0m1.389s 00:03:24.023 user 0m0.821s 00:03:24.023 sys 0m0.541s 00:03:24.023 10:20:12 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.023 10:20:12 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:24.023 ************************************ 00:03:24.023 END TEST env_vtophys 00:03:24.023 ************************************ 00:03:24.023 10:20:12 env -- common/autotest_common.sh@1142 -- # return 0 00:03:24.023 10:20:12 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:24.023 10:20:12 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.023 10:20:12 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.023 10:20:12 env -- common/autotest_common.sh@10 -- # set +x 00:03:24.023 ************************************ 00:03:24.023 START TEST env_pci 00:03:24.023 ************************************ 00:03:24.023 10:20:12 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:24.023 00:03:24.023 00:03:24.023 CUnit - A unit testing framework for C - Version 2.1-3 00:03:24.023 http://cunit.sourceforge.net/ 00:03:24.023 00:03:24.023 00:03:24.023 Suite: pci 00:03:24.023 Test: pci_hook ...[2024-07-15 10:20:12.509450] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1075540 has claimed it 00:03:24.023 EAL: Cannot find device (10000:00:01.0) 00:03:24.024 EAL: Failed to attach device on primary process 00:03:24.024 passed 00:03:24.024 00:03:24.024 Run Summary: Type Total Ran Passed Failed Inactive 00:03:24.024 suites 1 1 n/a 0 0 00:03:24.024 tests 1 1 1 0 0 00:03:24.024 asserts 25 25 25 0 n/a 00:03:24.024 00:03:24.024 Elapsed time = 0.021 seconds 00:03:24.024 00:03:24.024 real 0m0.034s 00:03:24.024 user 0m0.010s 00:03:24.024 sys 0m0.024s 00:03:24.024 10:20:12 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.024 10:20:12 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:24.024 ************************************ 00:03:24.024 END TEST env_pci 00:03:24.024 ************************************ 
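For reference, the env unit tests exercised above (memory_ut, vtophys, pci_ut) are standalone binaries under test/env and can be re-run outside the CI wrapper; a minimal sketch, assuming a local SPDK checkout at a hypothetical $SPDK_DIR and hugepages already configured via scripts/setup.sh:

    SPDK_DIR=/path/to/spdk                      # hypothetical local checkout path, stands in for the Jenkins workspace
    sudo "$SPDK_DIR/test/env/memory/memory_ut"  # mem map alloc/translation/registration tests
    sudo "$SPDK_DIR/test/env/vtophys/vtophys"   # EAL init plus the malloc expand/shrink exercise
    sudo "$SPDK_DIR/test/env/pci/pci_ut"        # the "Cannot find device" / claim errors above are the expected negative path; the test still reports passed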
00:03:24.024 10:20:12 env -- common/autotest_common.sh@1142 -- # return 0 00:03:24.024 10:20:12 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:24.024 10:20:12 env -- env/env.sh@15 -- # uname 00:03:24.024 10:20:12 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:24.024 10:20:12 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:24.024 10:20:12 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:24.024 10:20:12 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:24.024 10:20:12 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.024 10:20:12 env -- common/autotest_common.sh@10 -- # set +x 00:03:24.283 ************************************ 00:03:24.283 START TEST env_dpdk_post_init 00:03:24.283 ************************************ 00:03:24.283 10:20:12 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:24.283 EAL: Detected CPU lcores: 48 00:03:24.283 EAL: Detected NUMA nodes: 2 00:03:24.283 EAL: Detected shared linkage of DPDK 00:03:24.283 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:24.283 EAL: Selected IOVA mode 'VA' 00:03:24.283 EAL: No free 2048 kB hugepages reported on node 1 00:03:24.283 EAL: VFIO support initialized 00:03:24.283 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:24.283 EAL: Using IOMMU type 1 (Type 1) 00:03:24.283 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:24.283 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:24.283 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:24.283 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:24.283 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:24.283 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:24.283 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:24.283 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:25.218 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:03:25.218 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:25.218 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:25.218 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:25.218 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:25.218 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:25.218 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:25.218 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:25.218 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:28.492 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:03:28.492 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:03:28.492 Starting DPDK initialization... 00:03:28.492 Starting SPDK post initialization... 00:03:28.492 SPDK NVMe probe 00:03:28.492 Attaching to 0000:0b:00.0 00:03:28.492 Attached to 0000:0b:00.0 00:03:28.492 Cleaning up... 
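The env_dpdk_post_init probe above runs on a single core with a fixed base virtual address; roughly the following invocation reproduces it, with the NVMe controller at 0000:0b:00.0 bound to vfio-pci beforehand (a sketch of the run_test call, not the exact CI wrapper; $SPDK_DIR is the hypothetical checkout path from the earlier sketch):

    sudo "$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000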
00:03:28.492 00:03:28.492 real 0m4.344s 00:03:28.492 user 0m3.218s 00:03:28.492 sys 0m0.185s 00:03:28.492 10:20:16 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.492 10:20:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:28.492 ************************************ 00:03:28.492 END TEST env_dpdk_post_init 00:03:28.492 ************************************ 00:03:28.492 10:20:16 env -- common/autotest_common.sh@1142 -- # return 0 00:03:28.492 10:20:16 env -- env/env.sh@26 -- # uname 00:03:28.492 10:20:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:28.492 10:20:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:28.492 10:20:16 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.492 10:20:16 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.492 10:20:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.492 ************************************ 00:03:28.492 START TEST env_mem_callbacks 00:03:28.492 ************************************ 00:03:28.492 10:20:16 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:28.492 EAL: Detected CPU lcores: 48 00:03:28.492 EAL: Detected NUMA nodes: 2 00:03:28.492 EAL: Detected shared linkage of DPDK 00:03:28.492 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:28.492 EAL: Selected IOVA mode 'VA' 00:03:28.492 EAL: No free 2048 kB hugepages reported on node 1 00:03:28.492 EAL: VFIO support initialized 00:03:28.492 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:28.492 00:03:28.492 00:03:28.492 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.492 http://cunit.sourceforge.net/ 00:03:28.492 00:03:28.492 00:03:28.492 Suite: memory 00:03:28.492 Test: test ... 
00:03:28.492 register 0x200000200000 2097152 00:03:28.492 malloc 3145728 00:03:28.492 register 0x200000400000 4194304 00:03:28.492 buf 0x200000500000 len 3145728 PASSED 00:03:28.492 malloc 64 00:03:28.492 buf 0x2000004fff40 len 64 PASSED 00:03:28.492 malloc 4194304 00:03:28.492 register 0x200000800000 6291456 00:03:28.492 buf 0x200000a00000 len 4194304 PASSED 00:03:28.492 free 0x200000500000 3145728 00:03:28.492 free 0x2000004fff40 64 00:03:28.492 unregister 0x200000400000 4194304 PASSED 00:03:28.492 free 0x200000a00000 4194304 00:03:28.492 unregister 0x200000800000 6291456 PASSED 00:03:28.492 malloc 8388608 00:03:28.492 register 0x200000400000 10485760 00:03:28.492 buf 0x200000600000 len 8388608 PASSED 00:03:28.492 free 0x200000600000 8388608 00:03:28.492 unregister 0x200000400000 10485760 PASSED 00:03:28.492 passed 00:03:28.492 00:03:28.492 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.492 suites 1 1 n/a 0 0 00:03:28.492 tests 1 1 1 0 0 00:03:28.492 asserts 15 15 15 0 n/a 00:03:28.492 00:03:28.492 Elapsed time = 0.005 seconds 00:03:28.492 00:03:28.492 real 0m0.049s 00:03:28.492 user 0m0.013s 00:03:28.492 sys 0m0.036s 00:03:28.492 10:20:17 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.492 10:20:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:28.492 ************************************ 00:03:28.492 END TEST env_mem_callbacks 00:03:28.492 ************************************ 00:03:28.751 10:20:17 env -- common/autotest_common.sh@1142 -- # return 0 00:03:28.751 00:03:28.751 real 0m6.257s 00:03:28.751 user 0m4.319s 00:03:28.751 sys 0m0.989s 00:03:28.751 10:20:17 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.751 10:20:17 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.751 ************************************ 00:03:28.751 END TEST env 00:03:28.751 ************************************ 00:03:28.751 10:20:17 -- common/autotest_common.sh@1142 -- # return 0 00:03:28.751 10:20:17 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:28.751 10:20:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.751 10:20:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.751 10:20:17 -- common/autotest_common.sh@10 -- # set +x 00:03:28.751 ************************************ 00:03:28.751 START TEST rpc 00:03:28.751 ************************************ 00:03:28.751 10:20:17 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:28.751 * Looking for test storage... 00:03:28.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:28.751 10:20:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1076189 00:03:28.751 10:20:17 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:28.751 10:20:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:28.751 10:20:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1076189 00:03:28.751 10:20:17 rpc -- common/autotest_common.sh@829 -- # '[' -z 1076189 ']' 00:03:28.751 10:20:17 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:28.751 10:20:17 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:28.751 10:20:17 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:03:28.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:28.751 10:20:17 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:28.751 10:20:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.751 [2024-07-15 10:20:17.200010] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:03:28.751 [2024-07-15 10:20:17.200093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076189 ] 00:03:28.751 EAL: No free 2048 kB hugepages reported on node 1 00:03:28.751 [2024-07-15 10:20:17.257239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:29.009 [2024-07-15 10:20:17.365103] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:29.009 [2024-07-15 10:20:17.365156] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1076189' to capture a snapshot of events at runtime. 00:03:29.009 [2024-07-15 10:20:17.365185] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:29.009 [2024-07-15 10:20:17.365196] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:29.009 [2024-07-15 10:20:17.365206] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1076189 for offline analysis/debug. 00:03:29.009 [2024-07-15 10:20:17.365233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:29.267 10:20:17 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:29.267 10:20:17 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:29.267 10:20:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:29.267 10:20:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:29.267 10:20:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:29.267 10:20:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:29.267 10:20:17 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.267 10:20:17 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.267 10:20:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.267 ************************************ 00:03:29.267 START TEST rpc_integrity 00:03:29.267 ************************************ 00:03:29.267 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:29.267 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:29.267 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.267 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.267 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.267 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:03:29.267 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:29.267 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:29.267 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:29.267 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.267 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.267 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.267 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:29.267 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:29.267 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.267 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.267 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.268 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:29.268 { 00:03:29.268 "name": "Malloc0", 00:03:29.268 "aliases": [ 00:03:29.268 "a4eb11af-a071-46bf-9419-8205d539f99c" 00:03:29.268 ], 00:03:29.268 "product_name": "Malloc disk", 00:03:29.268 "block_size": 512, 00:03:29.268 "num_blocks": 16384, 00:03:29.268 "uuid": "a4eb11af-a071-46bf-9419-8205d539f99c", 00:03:29.268 "assigned_rate_limits": { 00:03:29.268 "rw_ios_per_sec": 0, 00:03:29.268 "rw_mbytes_per_sec": 0, 00:03:29.268 "r_mbytes_per_sec": 0, 00:03:29.268 "w_mbytes_per_sec": 0 00:03:29.268 }, 00:03:29.268 "claimed": false, 00:03:29.268 "zoned": false, 00:03:29.268 "supported_io_types": { 00:03:29.268 "read": true, 00:03:29.268 "write": true, 00:03:29.268 "unmap": true, 00:03:29.268 "flush": true, 00:03:29.268 "reset": true, 00:03:29.268 "nvme_admin": false, 00:03:29.268 "nvme_io": false, 00:03:29.268 "nvme_io_md": false, 00:03:29.268 "write_zeroes": true, 00:03:29.268 "zcopy": true, 00:03:29.268 "get_zone_info": false, 00:03:29.268 "zone_management": false, 00:03:29.268 "zone_append": false, 00:03:29.268 "compare": false, 00:03:29.268 "compare_and_write": false, 00:03:29.268 "abort": true, 00:03:29.268 "seek_hole": false, 00:03:29.268 "seek_data": false, 00:03:29.268 "copy": true, 00:03:29.268 "nvme_iov_md": false 00:03:29.268 }, 00:03:29.268 "memory_domains": [ 00:03:29.268 { 00:03:29.268 "dma_device_id": "system", 00:03:29.268 "dma_device_type": 1 00:03:29.268 }, 00:03:29.268 { 00:03:29.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.268 "dma_device_type": 2 00:03:29.268 } 00:03:29.268 ], 00:03:29.268 "driver_specific": {} 00:03:29.268 } 00:03:29.268 ]' 00:03:29.268 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:29.268 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:29.268 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.268 [2024-07-15 10:20:17.722022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:29.268 [2024-07-15 10:20:17.722061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:29.268 [2024-07-15 10:20:17.722098] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dced50 00:03:29.268 [2024-07-15 10:20:17.722112] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:29.268 
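The rpc_integrity run above exercises the basic bdev RPC round trip against the freshly started spdk_tgt: create a malloc bdev, layer a passthru bdev on top of it, inspect the bdev list, then tear both down again. For reference, roughly the same sequence can be issued by hand with SPDK's bundled rpc.py client (a sketch only; the script path and the default /var/tmp/spdk.sock socket are assumed from a standard source tree):

$ ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512             # 8 MB malloc bdev, 512-byte blocks -> prints Malloc0
$ ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc0 -p Passthru0
$ ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs | jq length           # expect 2: Malloc0 (now claimed) plus Passthru0
$ ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_delete Passthru0
$ ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Malloc0           # a final bdev_get_bdevs should report an empty list again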
[2024-07-15 10:20:17.723395] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:29.268 [2024-07-15 10:20:17.723416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:29.268 Passthru0 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.268 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.268 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:29.268 { 00:03:29.268 "name": "Malloc0", 00:03:29.268 "aliases": [ 00:03:29.268 "a4eb11af-a071-46bf-9419-8205d539f99c" 00:03:29.268 ], 00:03:29.268 "product_name": "Malloc disk", 00:03:29.268 "block_size": 512, 00:03:29.268 "num_blocks": 16384, 00:03:29.268 "uuid": "a4eb11af-a071-46bf-9419-8205d539f99c", 00:03:29.268 "assigned_rate_limits": { 00:03:29.268 "rw_ios_per_sec": 0, 00:03:29.268 "rw_mbytes_per_sec": 0, 00:03:29.268 "r_mbytes_per_sec": 0, 00:03:29.268 "w_mbytes_per_sec": 0 00:03:29.268 }, 00:03:29.268 "claimed": true, 00:03:29.268 "claim_type": "exclusive_write", 00:03:29.268 "zoned": false, 00:03:29.268 "supported_io_types": { 00:03:29.268 "read": true, 00:03:29.268 "write": true, 00:03:29.268 "unmap": true, 00:03:29.268 "flush": true, 00:03:29.268 "reset": true, 00:03:29.268 "nvme_admin": false, 00:03:29.268 "nvme_io": false, 00:03:29.268 "nvme_io_md": false, 00:03:29.268 "write_zeroes": true, 00:03:29.268 "zcopy": true, 00:03:29.268 "get_zone_info": false, 00:03:29.268 "zone_management": false, 00:03:29.268 "zone_append": false, 00:03:29.268 "compare": false, 00:03:29.268 "compare_and_write": false, 00:03:29.268 "abort": true, 00:03:29.268 "seek_hole": false, 00:03:29.268 "seek_data": false, 00:03:29.268 "copy": true, 00:03:29.268 "nvme_iov_md": false 00:03:29.268 }, 00:03:29.268 "memory_domains": [ 00:03:29.268 { 00:03:29.268 "dma_device_id": "system", 00:03:29.268 "dma_device_type": 1 00:03:29.268 }, 00:03:29.268 { 00:03:29.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.268 "dma_device_type": 2 00:03:29.268 } 00:03:29.268 ], 00:03:29.268 "driver_specific": {} 00:03:29.268 }, 00:03:29.268 { 00:03:29.268 "name": "Passthru0", 00:03:29.268 "aliases": [ 00:03:29.268 "e865cb5f-1ffc-594b-9c8a-f5a920cb2993" 00:03:29.268 ], 00:03:29.268 "product_name": "passthru", 00:03:29.268 "block_size": 512, 00:03:29.268 "num_blocks": 16384, 00:03:29.268 "uuid": "e865cb5f-1ffc-594b-9c8a-f5a920cb2993", 00:03:29.268 "assigned_rate_limits": { 00:03:29.268 "rw_ios_per_sec": 0, 00:03:29.268 "rw_mbytes_per_sec": 0, 00:03:29.268 "r_mbytes_per_sec": 0, 00:03:29.268 "w_mbytes_per_sec": 0 00:03:29.268 }, 00:03:29.268 "claimed": false, 00:03:29.268 "zoned": false, 00:03:29.268 "supported_io_types": { 00:03:29.268 "read": true, 00:03:29.268 "write": true, 00:03:29.268 "unmap": true, 00:03:29.268 "flush": true, 00:03:29.268 "reset": true, 00:03:29.268 "nvme_admin": false, 00:03:29.268 "nvme_io": false, 00:03:29.268 "nvme_io_md": false, 00:03:29.268 "write_zeroes": true, 00:03:29.268 "zcopy": true, 00:03:29.268 "get_zone_info": false, 00:03:29.268 "zone_management": false, 00:03:29.268 "zone_append": false, 00:03:29.268 "compare": false, 00:03:29.268 "compare_and_write": false, 00:03:29.268 "abort": true, 00:03:29.268 "seek_hole": false, 
00:03:29.268 "seek_data": false, 00:03:29.268 "copy": true, 00:03:29.268 "nvme_iov_md": false 00:03:29.268 }, 00:03:29.268 "memory_domains": [ 00:03:29.268 { 00:03:29.268 "dma_device_id": "system", 00:03:29.268 "dma_device_type": 1 00:03:29.268 }, 00:03:29.268 { 00:03:29.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.268 "dma_device_type": 2 00:03:29.268 } 00:03:29.268 ], 00:03:29.268 "driver_specific": { 00:03:29.268 "passthru": { 00:03:29.268 "name": "Passthru0", 00:03:29.268 "base_bdev_name": "Malloc0" 00:03:29.268 } 00:03:29.268 } 00:03:29.268 } 00:03:29.268 ]' 00:03:29.268 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:29.268 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:29.268 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.268 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.268 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.268 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.268 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:29.268 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:29.526 10:20:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:29.526 00:03:29.526 real 0m0.211s 00:03:29.526 user 0m0.136s 00:03:29.526 sys 0m0.019s 00:03:29.526 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.526 10:20:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.526 ************************************ 00:03:29.526 END TEST rpc_integrity 00:03:29.526 ************************************ 00:03:29.526 10:20:17 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:29.526 10:20:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:29.526 10:20:17 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.526 10:20:17 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.526 10:20:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.526 ************************************ 00:03:29.526 START TEST rpc_plugins 00:03:29.526 ************************************ 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:29.526 10:20:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.526 10:20:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:29.526 10:20:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.526 10:20:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:29.526 { 00:03:29.526 "name": "Malloc1", 00:03:29.526 "aliases": [ 00:03:29.526 "f736b381-c7c5-476e-83ce-8a6f31d0f7ce" 00:03:29.526 ], 00:03:29.526 "product_name": "Malloc disk", 00:03:29.526 "block_size": 4096, 00:03:29.526 "num_blocks": 256, 00:03:29.526 "uuid": "f736b381-c7c5-476e-83ce-8a6f31d0f7ce", 00:03:29.526 "assigned_rate_limits": { 00:03:29.526 "rw_ios_per_sec": 0, 00:03:29.526 "rw_mbytes_per_sec": 0, 00:03:29.526 "r_mbytes_per_sec": 0, 00:03:29.526 "w_mbytes_per_sec": 0 00:03:29.526 }, 00:03:29.526 "claimed": false, 00:03:29.526 "zoned": false, 00:03:29.526 "supported_io_types": { 00:03:29.526 "read": true, 00:03:29.526 "write": true, 00:03:29.526 "unmap": true, 00:03:29.526 "flush": true, 00:03:29.526 "reset": true, 00:03:29.526 "nvme_admin": false, 00:03:29.526 "nvme_io": false, 00:03:29.526 "nvme_io_md": false, 00:03:29.526 "write_zeroes": true, 00:03:29.526 "zcopy": true, 00:03:29.526 "get_zone_info": false, 00:03:29.526 "zone_management": false, 00:03:29.526 "zone_append": false, 00:03:29.526 "compare": false, 00:03:29.526 "compare_and_write": false, 00:03:29.526 "abort": true, 00:03:29.526 "seek_hole": false, 00:03:29.526 "seek_data": false, 00:03:29.526 "copy": true, 00:03:29.526 "nvme_iov_md": false 00:03:29.526 }, 00:03:29.526 "memory_domains": [ 00:03:29.526 { 00:03:29.526 "dma_device_id": "system", 00:03:29.526 "dma_device_type": 1 00:03:29.526 }, 00:03:29.526 { 00:03:29.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.526 "dma_device_type": 2 00:03:29.526 } 00:03:29.526 ], 00:03:29.526 "driver_specific": {} 00:03:29.526 } 00:03:29.526 ]' 00:03:29.526 10:20:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:29.526 10:20:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:29.526 10:20:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.526 10:20:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.526 10:20:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:29.526 10:20:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:29.526 10:20:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:29.526 00:03:29.526 real 0m0.108s 00:03:29.526 user 0m0.061s 00:03:29.526 sys 0m0.015s 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.526 10:20:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:29.526 ************************************ 00:03:29.526 END TEST rpc_plugins 00:03:29.526 ************************************ 00:03:29.526 10:20:18 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:29.526 10:20:18 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:29.526 10:20:18 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.526 10:20:18 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.526 10:20:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.526 ************************************ 00:03:29.526 START TEST rpc_trace_cmd_test 00:03:29.526 ************************************ 00:03:29.526 10:20:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:29.526 10:20:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:29.526 10:20:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:29.526 10:20:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.526 10:20:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:29.526 10:20:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.526 10:20:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:29.526 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1076189", 00:03:29.526 "tpoint_group_mask": "0x8", 00:03:29.526 "iscsi_conn": { 00:03:29.526 "mask": "0x2", 00:03:29.526 "tpoint_mask": "0x0" 00:03:29.526 }, 00:03:29.526 "scsi": { 00:03:29.526 "mask": "0x4", 00:03:29.526 "tpoint_mask": "0x0" 00:03:29.526 }, 00:03:29.526 "bdev": { 00:03:29.526 "mask": "0x8", 00:03:29.526 "tpoint_mask": "0xffffffffffffffff" 00:03:29.526 }, 00:03:29.526 "nvmf_rdma": { 00:03:29.526 "mask": "0x10", 00:03:29.526 "tpoint_mask": "0x0" 00:03:29.526 }, 00:03:29.526 "nvmf_tcp": { 00:03:29.526 "mask": "0x20", 00:03:29.526 "tpoint_mask": "0x0" 00:03:29.526 }, 00:03:29.526 "ftl": { 00:03:29.526 "mask": "0x40", 00:03:29.526 "tpoint_mask": "0x0" 00:03:29.526 }, 00:03:29.526 "blobfs": { 00:03:29.526 "mask": "0x80", 00:03:29.526 "tpoint_mask": "0x0" 00:03:29.526 }, 00:03:29.526 "dsa": { 00:03:29.526 "mask": "0x200", 00:03:29.526 "tpoint_mask": "0x0" 00:03:29.526 }, 00:03:29.526 "thread": { 00:03:29.526 "mask": "0x400", 00:03:29.526 "tpoint_mask": "0x0" 00:03:29.526 }, 00:03:29.526 "nvme_pcie": { 00:03:29.526 "mask": "0x800", 00:03:29.526 "tpoint_mask": "0x0" 00:03:29.526 }, 00:03:29.526 "iaa": { 00:03:29.526 "mask": "0x1000", 00:03:29.526 "tpoint_mask": "0x0" 00:03:29.526 }, 00:03:29.526 "nvme_tcp": { 00:03:29.526 "mask": "0x2000", 00:03:29.526 "tpoint_mask": "0x0" 00:03:29.526 }, 00:03:29.526 "bdev_nvme": { 00:03:29.526 "mask": "0x4000", 00:03:29.526 "tpoint_mask": "0x0" 00:03:29.526 }, 00:03:29.526 "sock": { 00:03:29.526 "mask": "0x8000", 00:03:29.526 "tpoint_mask": "0x0" 00:03:29.526 } 00:03:29.526 }' 00:03:29.526 10:20:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:29.786 10:20:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:29.786 10:20:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:29.786 10:20:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:29.786 10:20:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:29.786 10:20:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:29.786 10:20:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:29.786 10:20:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:29.786 10:20:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:29.786 10:20:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
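The trace_get_info output above is the direct result of starting spdk_tgt with '-e bdev' for this suite: the bdev tracepoint group (mask 0x8) is fully enabled (tpoint_mask 0xffffffffffffffff) while every other group stays at 0x0, and tpoint_shm_path names the shared-memory ring that collects the events. A snapshot of those events can be pulled from the running target exactly as the startup notice earlier in the log suggests (sketch; binary path assumed from the build tree):

$ ./build/bin/spdk_trace -s spdk_tgt -p 1076189        # pid from this run; reads /dev/shm/spdk_tgt_trace.pid1076189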
00:03:29.786 00:03:29.786 real 0m0.177s 00:03:29.786 user 0m0.152s 00:03:29.786 sys 0m0.017s 00:03:29.786 10:20:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.786 10:20:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:29.786 ************************************ 00:03:29.786 END TEST rpc_trace_cmd_test 00:03:29.786 ************************************ 00:03:29.786 10:20:18 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:29.786 10:20:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:29.786 10:20:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:29.786 10:20:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:29.786 10:20:18 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.786 10:20:18 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.786 10:20:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.786 ************************************ 00:03:29.786 START TEST rpc_daemon_integrity 00:03:29.786 ************************************ 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:29.786 { 00:03:29.786 "name": "Malloc2", 00:03:29.786 "aliases": [ 00:03:29.786 "4543324f-c97a-4f42-b0b7-b24fc9155237" 00:03:29.786 ], 00:03:29.786 "product_name": "Malloc disk", 00:03:29.786 "block_size": 512, 00:03:29.786 "num_blocks": 16384, 00:03:29.786 "uuid": "4543324f-c97a-4f42-b0b7-b24fc9155237", 00:03:29.786 "assigned_rate_limits": { 00:03:29.786 "rw_ios_per_sec": 0, 00:03:29.786 "rw_mbytes_per_sec": 0, 00:03:29.786 "r_mbytes_per_sec": 0, 00:03:29.786 "w_mbytes_per_sec": 0 00:03:29.786 }, 00:03:29.786 "claimed": false, 00:03:29.786 "zoned": false, 00:03:29.786 "supported_io_types": { 00:03:29.786 "read": true, 00:03:29.786 "write": true, 00:03:29.786 "unmap": true, 00:03:29.786 "flush": true, 00:03:29.786 "reset": true, 00:03:29.786 "nvme_admin": false, 00:03:29.786 "nvme_io": false, 
00:03:29.786 "nvme_io_md": false, 00:03:29.786 "write_zeroes": true, 00:03:29.786 "zcopy": true, 00:03:29.786 "get_zone_info": false, 00:03:29.786 "zone_management": false, 00:03:29.786 "zone_append": false, 00:03:29.786 "compare": false, 00:03:29.786 "compare_and_write": false, 00:03:29.786 "abort": true, 00:03:29.786 "seek_hole": false, 00:03:29.786 "seek_data": false, 00:03:29.786 "copy": true, 00:03:29.786 "nvme_iov_md": false 00:03:29.786 }, 00:03:29.786 "memory_domains": [ 00:03:29.786 { 00:03:29.786 "dma_device_id": "system", 00:03:29.786 "dma_device_type": 1 00:03:29.786 }, 00:03:29.786 { 00:03:29.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.786 "dma_device_type": 2 00:03:29.786 } 00:03:29.786 ], 00:03:29.786 "driver_specific": {} 00:03:29.786 } 00:03:29.786 ]' 00:03:29.786 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.043 [2024-07-15 10:20:18.355953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:30.043 [2024-07-15 10:20:18.355991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:30.043 [2024-07-15 10:20:18.356013] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dcfc00 00:03:30.043 [2024-07-15 10:20:18.356028] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:30.043 [2024-07-15 10:20:18.357212] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:30.043 [2024-07-15 10:20:18.357234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:30.043 Passthru0 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:30.043 { 00:03:30.043 "name": "Malloc2", 00:03:30.043 "aliases": [ 00:03:30.043 "4543324f-c97a-4f42-b0b7-b24fc9155237" 00:03:30.043 ], 00:03:30.043 "product_name": "Malloc disk", 00:03:30.043 "block_size": 512, 00:03:30.043 "num_blocks": 16384, 00:03:30.043 "uuid": "4543324f-c97a-4f42-b0b7-b24fc9155237", 00:03:30.043 "assigned_rate_limits": { 00:03:30.043 "rw_ios_per_sec": 0, 00:03:30.043 "rw_mbytes_per_sec": 0, 00:03:30.043 "r_mbytes_per_sec": 0, 00:03:30.043 "w_mbytes_per_sec": 0 00:03:30.043 }, 00:03:30.043 "claimed": true, 00:03:30.043 "claim_type": "exclusive_write", 00:03:30.043 "zoned": false, 00:03:30.043 "supported_io_types": { 00:03:30.043 "read": true, 00:03:30.043 "write": true, 00:03:30.043 "unmap": true, 00:03:30.043 "flush": true, 00:03:30.043 "reset": true, 00:03:30.043 "nvme_admin": false, 00:03:30.043 "nvme_io": false, 00:03:30.043 "nvme_io_md": false, 00:03:30.043 "write_zeroes": true, 00:03:30.043 "zcopy": true, 00:03:30.043 "get_zone_info": 
false, 00:03:30.043 "zone_management": false, 00:03:30.043 "zone_append": false, 00:03:30.043 "compare": false, 00:03:30.043 "compare_and_write": false, 00:03:30.043 "abort": true, 00:03:30.043 "seek_hole": false, 00:03:30.043 "seek_data": false, 00:03:30.043 "copy": true, 00:03:30.043 "nvme_iov_md": false 00:03:30.043 }, 00:03:30.043 "memory_domains": [ 00:03:30.043 { 00:03:30.043 "dma_device_id": "system", 00:03:30.043 "dma_device_type": 1 00:03:30.043 }, 00:03:30.043 { 00:03:30.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:30.043 "dma_device_type": 2 00:03:30.043 } 00:03:30.043 ], 00:03:30.043 "driver_specific": {} 00:03:30.043 }, 00:03:30.043 { 00:03:30.043 "name": "Passthru0", 00:03:30.043 "aliases": [ 00:03:30.043 "40ce4143-333a-56a6-ab95-93292b5701fb" 00:03:30.043 ], 00:03:30.043 "product_name": "passthru", 00:03:30.043 "block_size": 512, 00:03:30.043 "num_blocks": 16384, 00:03:30.043 "uuid": "40ce4143-333a-56a6-ab95-93292b5701fb", 00:03:30.043 "assigned_rate_limits": { 00:03:30.043 "rw_ios_per_sec": 0, 00:03:30.043 "rw_mbytes_per_sec": 0, 00:03:30.043 "r_mbytes_per_sec": 0, 00:03:30.043 "w_mbytes_per_sec": 0 00:03:30.043 }, 00:03:30.043 "claimed": false, 00:03:30.043 "zoned": false, 00:03:30.043 "supported_io_types": { 00:03:30.043 "read": true, 00:03:30.043 "write": true, 00:03:30.043 "unmap": true, 00:03:30.043 "flush": true, 00:03:30.043 "reset": true, 00:03:30.043 "nvme_admin": false, 00:03:30.043 "nvme_io": false, 00:03:30.043 "nvme_io_md": false, 00:03:30.043 "write_zeroes": true, 00:03:30.043 "zcopy": true, 00:03:30.043 "get_zone_info": false, 00:03:30.043 "zone_management": false, 00:03:30.043 "zone_append": false, 00:03:30.043 "compare": false, 00:03:30.043 "compare_and_write": false, 00:03:30.043 "abort": true, 00:03:30.043 "seek_hole": false, 00:03:30.043 "seek_data": false, 00:03:30.043 "copy": true, 00:03:30.043 "nvme_iov_md": false 00:03:30.043 }, 00:03:30.043 "memory_domains": [ 00:03:30.043 { 00:03:30.043 "dma_device_id": "system", 00:03:30.043 "dma_device_type": 1 00:03:30.043 }, 00:03:30.043 { 00:03:30.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:30.043 "dma_device_type": 2 00:03:30.043 } 00:03:30.043 ], 00:03:30.043 "driver_specific": { 00:03:30.043 "passthru": { 00:03:30.043 "name": "Passthru0", 00:03:30.043 "base_bdev_name": "Malloc2" 00:03:30.043 } 00:03:30.043 } 00:03:30.043 } 00:03:30.043 ]' 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:30.043 10:20:18 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.043 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:30.044 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:30.044 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:30.044 10:20:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:30.044 00:03:30.044 real 0m0.208s 00:03:30.044 user 0m0.136s 00:03:30.044 sys 0m0.018s 00:03:30.044 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.044 10:20:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.044 ************************************ 00:03:30.044 END TEST rpc_daemon_integrity 00:03:30.044 ************************************ 00:03:30.044 10:20:18 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:30.044 10:20:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:30.044 10:20:18 rpc -- rpc/rpc.sh@84 -- # killprocess 1076189 00:03:30.044 10:20:18 rpc -- common/autotest_common.sh@948 -- # '[' -z 1076189 ']' 00:03:30.044 10:20:18 rpc -- common/autotest_common.sh@952 -- # kill -0 1076189 00:03:30.044 10:20:18 rpc -- common/autotest_common.sh@953 -- # uname 00:03:30.044 10:20:18 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:30.044 10:20:18 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1076189 00:03:30.044 10:20:18 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:30.044 10:20:18 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:30.044 10:20:18 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1076189' 00:03:30.044 killing process with pid 1076189 00:03:30.044 10:20:18 rpc -- common/autotest_common.sh@967 -- # kill 1076189 00:03:30.044 10:20:18 rpc -- common/autotest_common.sh@972 -- # wait 1076189 00:03:30.607 00:03:30.607 real 0m1.829s 00:03:30.607 user 0m2.286s 00:03:30.607 sys 0m0.537s 00:03:30.607 10:20:18 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.607 10:20:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.607 ************************************ 00:03:30.607 END TEST rpc 00:03:30.607 ************************************ 00:03:30.607 10:20:18 -- common/autotest_common.sh@1142 -- # return 0 00:03:30.607 10:20:18 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:30.607 10:20:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.607 10:20:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.607 10:20:18 -- common/autotest_common.sh@10 -- # set +x 00:03:30.607 ************************************ 00:03:30.607 START TEST skip_rpc 00:03:30.607 ************************************ 00:03:30.607 10:20:18 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:30.607 * Looking for test storage... 
00:03:30.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:30.607 10:20:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:30.607 10:20:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:30.607 10:20:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:30.607 10:20:19 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.607 10:20:19 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.607 10:20:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.607 ************************************ 00:03:30.607 START TEST skip_rpc 00:03:30.607 ************************************ 00:03:30.607 10:20:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:30.607 10:20:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1076628 00:03:30.607 10:20:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:30.607 10:20:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:30.607 10:20:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:30.607 [2024-07-15 10:20:19.102216] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:03:30.607 [2024-07-15 10:20:19.102292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076628 ] 00:03:30.607 EAL: No free 2048 kB hugepages reported on node 1 00:03:30.607 [2024-07-15 10:20:19.155615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.865 [2024-07-15 10:20:19.257177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1076628 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1076628 ']' 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1076628 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1076628 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1076628' 00:03:36.124 killing process with pid 1076628 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1076628 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1076628 00:03:36.124 00:03:36.124 real 0m5.446s 00:03:36.124 user 0m5.152s 00:03:36.124 sys 0m0.298s 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.124 10:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.124 ************************************ 00:03:36.124 END TEST skip_rpc 00:03:36.124 ************************************ 00:03:36.124 10:20:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:36.124 10:20:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:36.124 10:20:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.124 10:20:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.124 10:20:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.124 ************************************ 00:03:36.124 START TEST skip_rpc_with_json 00:03:36.124 ************************************ 00:03:36.124 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:03:36.124 10:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:36.124 10:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1077335 00:03:36.124 10:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:36.124 10:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:36.124 10:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1077335 00:03:36.124 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1077335 ']' 00:03:36.124 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:36.124 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:36.124 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:36.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
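The skip_rpc case that finished just above is the negative control of the suite: spdk_tgt was started with --no-rpc-server, so the rpc_cmd spdk_get_version attempt has to fail, and the NOT wrapper turning that failure into es=1 is precisely what makes the test pass. Reproduced by hand it looks like this (sketch; paths assumed):

$ ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
$ ./scripts/rpc.py spdk_get_version || echo "RPC refused, as expected without an RPC server"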
00:03:36.124 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:36.124 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:36.124 [2024-07-15 10:20:24.597755] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:03:36.124 [2024-07-15 10:20:24.597872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077335 ] 00:03:36.124 EAL: No free 2048 kB hugepages reported on node 1 00:03:36.124 [2024-07-15 10:20:24.654237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.382 [2024-07-15 10:20:24.765668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:36.640 [2024-07-15 10:20:25.005545] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:36.640 request: 00:03:36.640 { 00:03:36.640 "trtype": "tcp", 00:03:36.640 "method": "nvmf_get_transports", 00:03:36.640 "req_id": 1 00:03:36.640 } 00:03:36.640 Got JSON-RPC error response 00:03:36.640 response: 00:03:36.640 { 00:03:36.640 "code": -19, 00:03:36.640 "message": "No such device" 00:03:36.640 } 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:36.640 [2024-07-15 10:20:25.013646] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:36.640 10:20:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:36.640 { 00:03:36.640 "subsystems": [ 00:03:36.640 { 00:03:36.640 "subsystem": "vfio_user_target", 00:03:36.640 "config": null 00:03:36.640 }, 00:03:36.640 { 00:03:36.640 "subsystem": "keyring", 00:03:36.640 "config": [] 00:03:36.640 }, 00:03:36.640 { 00:03:36.640 "subsystem": "iobuf", 00:03:36.640 "config": [ 00:03:36.640 { 00:03:36.640 "method": "iobuf_set_options", 00:03:36.640 "params": { 00:03:36.640 "small_pool_count": 8192, 00:03:36.640 "large_pool_count": 1024, 00:03:36.640 "small_bufsize": 8192, 00:03:36.640 "large_bufsize": 
135168 00:03:36.640 } 00:03:36.640 } 00:03:36.640 ] 00:03:36.640 }, 00:03:36.640 { 00:03:36.640 "subsystem": "sock", 00:03:36.640 "config": [ 00:03:36.640 { 00:03:36.640 "method": "sock_set_default_impl", 00:03:36.640 "params": { 00:03:36.640 "impl_name": "posix" 00:03:36.640 } 00:03:36.640 }, 00:03:36.640 { 00:03:36.640 "method": "sock_impl_set_options", 00:03:36.640 "params": { 00:03:36.640 "impl_name": "ssl", 00:03:36.640 "recv_buf_size": 4096, 00:03:36.641 "send_buf_size": 4096, 00:03:36.641 "enable_recv_pipe": true, 00:03:36.641 "enable_quickack": false, 00:03:36.641 "enable_placement_id": 0, 00:03:36.641 "enable_zerocopy_send_server": true, 00:03:36.641 "enable_zerocopy_send_client": false, 00:03:36.641 "zerocopy_threshold": 0, 00:03:36.641 "tls_version": 0, 00:03:36.641 "enable_ktls": false 00:03:36.641 } 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "method": "sock_impl_set_options", 00:03:36.641 "params": { 00:03:36.641 "impl_name": "posix", 00:03:36.641 "recv_buf_size": 2097152, 00:03:36.641 "send_buf_size": 2097152, 00:03:36.641 "enable_recv_pipe": true, 00:03:36.641 "enable_quickack": false, 00:03:36.641 "enable_placement_id": 0, 00:03:36.641 "enable_zerocopy_send_server": true, 00:03:36.641 "enable_zerocopy_send_client": false, 00:03:36.641 "zerocopy_threshold": 0, 00:03:36.641 "tls_version": 0, 00:03:36.641 "enable_ktls": false 00:03:36.641 } 00:03:36.641 } 00:03:36.641 ] 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "subsystem": "vmd", 00:03:36.641 "config": [] 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "subsystem": "accel", 00:03:36.641 "config": [ 00:03:36.641 { 00:03:36.641 "method": "accel_set_options", 00:03:36.641 "params": { 00:03:36.641 "small_cache_size": 128, 00:03:36.641 "large_cache_size": 16, 00:03:36.641 "task_count": 2048, 00:03:36.641 "sequence_count": 2048, 00:03:36.641 "buf_count": 2048 00:03:36.641 } 00:03:36.641 } 00:03:36.641 ] 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "subsystem": "bdev", 00:03:36.641 "config": [ 00:03:36.641 { 00:03:36.641 "method": "bdev_set_options", 00:03:36.641 "params": { 00:03:36.641 "bdev_io_pool_size": 65535, 00:03:36.641 "bdev_io_cache_size": 256, 00:03:36.641 "bdev_auto_examine": true, 00:03:36.641 "iobuf_small_cache_size": 128, 00:03:36.641 "iobuf_large_cache_size": 16 00:03:36.641 } 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "method": "bdev_raid_set_options", 00:03:36.641 "params": { 00:03:36.641 "process_window_size_kb": 1024 00:03:36.641 } 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "method": "bdev_iscsi_set_options", 00:03:36.641 "params": { 00:03:36.641 "timeout_sec": 30 00:03:36.641 } 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "method": "bdev_nvme_set_options", 00:03:36.641 "params": { 00:03:36.641 "action_on_timeout": "none", 00:03:36.641 "timeout_us": 0, 00:03:36.641 "timeout_admin_us": 0, 00:03:36.641 "keep_alive_timeout_ms": 10000, 00:03:36.641 "arbitration_burst": 0, 00:03:36.641 "low_priority_weight": 0, 00:03:36.641 "medium_priority_weight": 0, 00:03:36.641 "high_priority_weight": 0, 00:03:36.641 "nvme_adminq_poll_period_us": 10000, 00:03:36.641 "nvme_ioq_poll_period_us": 0, 00:03:36.641 "io_queue_requests": 0, 00:03:36.641 "delay_cmd_submit": true, 00:03:36.641 "transport_retry_count": 4, 00:03:36.641 "bdev_retry_count": 3, 00:03:36.641 "transport_ack_timeout": 0, 00:03:36.641 "ctrlr_loss_timeout_sec": 0, 00:03:36.641 "reconnect_delay_sec": 0, 00:03:36.641 "fast_io_fail_timeout_sec": 0, 00:03:36.641 "disable_auto_failback": false, 00:03:36.641 "generate_uuids": false, 00:03:36.641 "transport_tos": 0, 
00:03:36.641 "nvme_error_stat": false, 00:03:36.641 "rdma_srq_size": 0, 00:03:36.641 "io_path_stat": false, 00:03:36.641 "allow_accel_sequence": false, 00:03:36.641 "rdma_max_cq_size": 0, 00:03:36.641 "rdma_cm_event_timeout_ms": 0, 00:03:36.641 "dhchap_digests": [ 00:03:36.641 "sha256", 00:03:36.641 "sha384", 00:03:36.641 "sha512" 00:03:36.641 ], 00:03:36.641 "dhchap_dhgroups": [ 00:03:36.641 "null", 00:03:36.641 "ffdhe2048", 00:03:36.641 "ffdhe3072", 00:03:36.641 "ffdhe4096", 00:03:36.641 "ffdhe6144", 00:03:36.641 "ffdhe8192" 00:03:36.641 ] 00:03:36.641 } 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "method": "bdev_nvme_set_hotplug", 00:03:36.641 "params": { 00:03:36.641 "period_us": 100000, 00:03:36.641 "enable": false 00:03:36.641 } 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "method": "bdev_wait_for_examine" 00:03:36.641 } 00:03:36.641 ] 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "subsystem": "scsi", 00:03:36.641 "config": null 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "subsystem": "scheduler", 00:03:36.641 "config": [ 00:03:36.641 { 00:03:36.641 "method": "framework_set_scheduler", 00:03:36.641 "params": { 00:03:36.641 "name": "static" 00:03:36.641 } 00:03:36.641 } 00:03:36.641 ] 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "subsystem": "vhost_scsi", 00:03:36.641 "config": [] 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "subsystem": "vhost_blk", 00:03:36.641 "config": [] 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "subsystem": "ublk", 00:03:36.641 "config": [] 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "subsystem": "nbd", 00:03:36.641 "config": [] 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "subsystem": "nvmf", 00:03:36.641 "config": [ 00:03:36.641 { 00:03:36.641 "method": "nvmf_set_config", 00:03:36.641 "params": { 00:03:36.641 "discovery_filter": "match_any", 00:03:36.641 "admin_cmd_passthru": { 00:03:36.641 "identify_ctrlr": false 00:03:36.641 } 00:03:36.641 } 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "method": "nvmf_set_max_subsystems", 00:03:36.641 "params": { 00:03:36.641 "max_subsystems": 1024 00:03:36.641 } 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "method": "nvmf_set_crdt", 00:03:36.641 "params": { 00:03:36.641 "crdt1": 0, 00:03:36.641 "crdt2": 0, 00:03:36.641 "crdt3": 0 00:03:36.641 } 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "method": "nvmf_create_transport", 00:03:36.641 "params": { 00:03:36.641 "trtype": "TCP", 00:03:36.641 "max_queue_depth": 128, 00:03:36.641 "max_io_qpairs_per_ctrlr": 127, 00:03:36.641 "in_capsule_data_size": 4096, 00:03:36.641 "max_io_size": 131072, 00:03:36.641 "io_unit_size": 131072, 00:03:36.641 "max_aq_depth": 128, 00:03:36.641 "num_shared_buffers": 511, 00:03:36.641 "buf_cache_size": 4294967295, 00:03:36.641 "dif_insert_or_strip": false, 00:03:36.641 "zcopy": false, 00:03:36.641 "c2h_success": true, 00:03:36.641 "sock_priority": 0, 00:03:36.641 "abort_timeout_sec": 1, 00:03:36.641 "ack_timeout": 0, 00:03:36.641 "data_wr_pool_size": 0 00:03:36.641 } 00:03:36.641 } 00:03:36.641 ] 00:03:36.641 }, 00:03:36.641 { 00:03:36.641 "subsystem": "iscsi", 00:03:36.641 "config": [ 00:03:36.641 { 00:03:36.641 "method": "iscsi_set_options", 00:03:36.641 "params": { 00:03:36.641 "node_base": "iqn.2016-06.io.spdk", 00:03:36.641 "max_sessions": 128, 00:03:36.641 "max_connections_per_session": 2, 00:03:36.641 "max_queue_depth": 64, 00:03:36.641 "default_time2wait": 2, 00:03:36.641 "default_time2retain": 20, 00:03:36.641 "first_burst_length": 8192, 00:03:36.641 "immediate_data": true, 00:03:36.641 "allow_duplicated_isid": false, 00:03:36.641 
"error_recovery_level": 0, 00:03:36.641 "nop_timeout": 60, 00:03:36.641 "nop_in_interval": 30, 00:03:36.641 "disable_chap": false, 00:03:36.641 "require_chap": false, 00:03:36.641 "mutual_chap": false, 00:03:36.641 "chap_group": 0, 00:03:36.641 "max_large_datain_per_connection": 64, 00:03:36.641 "max_r2t_per_connection": 4, 00:03:36.641 "pdu_pool_size": 36864, 00:03:36.641 "immediate_data_pool_size": 16384, 00:03:36.641 "data_out_pool_size": 2048 00:03:36.641 } 00:03:36.641 } 00:03:36.641 ] 00:03:36.641 } 00:03:36.641 ] 00:03:36.641 } 00:03:36.641 10:20:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:36.641 10:20:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1077335 00:03:36.641 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1077335 ']' 00:03:36.641 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1077335 00:03:36.641 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:36.641 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:36.641 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1077335 00:03:36.899 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:36.899 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:36.899 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1077335' 00:03:36.899 killing process with pid 1077335 00:03:36.899 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1077335 00:03:36.899 10:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1077335 00:03:37.157 10:20:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1077475 00:03:37.157 10:20:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:37.157 10:20:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:42.419 10:20:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1077475 00:03:42.419 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1077475 ']' 00:03:42.419 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1077475 00:03:42.419 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:42.419 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:42.419 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1077475 00:03:42.419 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:42.419 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:42.419 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1077475' 00:03:42.419 killing process with pid 1077475 00:03:42.419 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1077475 00:03:42.419 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1077475 
00:03:42.677 10:20:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:42.677 10:20:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:42.677 00:03:42.677 real 0m6.507s 00:03:42.677 user 0m6.146s 00:03:42.677 sys 0m0.630s 00:03:42.677 10:20:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.677 10:20:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.677 ************************************ 00:03:42.677 END TEST skip_rpc_with_json 00:03:42.677 ************************************ 00:03:42.677 10:20:31 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:42.678 10:20:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:42.678 10:20:31 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.678 10:20:31 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.678 10:20:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.678 ************************************ 00:03:42.678 START TEST skip_rpc_with_delay 00:03:42.678 ************************************ 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:42.678 [2024-07-15 10:20:31.156599] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
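skip_rpc_with_delay is another negative check: --wait-for-rpc defers initialization until an RPC tells the app to proceed, so combining it with --no-rpc-server can never work, and spdk_app_start refuses to run with exactly the error shown above. By hand it is a one-liner (sketch; path assumed):

$ ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; echo "exit: $?"   # expected to exit non-zero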
00:03:42.678 [2024-07-15 10:20:31.156708] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:42.678 00:03:42.678 real 0m0.069s 00:03:42.678 user 0m0.040s 00:03:42.678 sys 0m0.028s 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.678 10:20:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:42.678 ************************************ 00:03:42.678 END TEST skip_rpc_with_delay 00:03:42.678 ************************************ 00:03:42.678 10:20:31 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:42.678 10:20:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:42.678 10:20:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:42.678 10:20:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:42.678 10:20:31 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.678 10:20:31 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.678 10:20:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.678 ************************************ 00:03:42.678 START TEST exit_on_failed_rpc_init 00:03:42.678 ************************************ 00:03:42.678 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:03:42.678 10:20:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1078187 00:03:42.678 10:20:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:42.678 10:20:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1078187 00:03:42.678 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1078187 ']' 00:03:42.678 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:42.678 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:42.678 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:42.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:42.678 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:42.678 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:42.936 [2024-07-15 10:20:31.275388] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:03:42.936 [2024-07-15 10:20:31.275483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078187 ] 00:03:42.936 EAL: No free 2048 kB hugepages reported on node 1 00:03:42.936 [2024-07-15 10:20:31.332292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.937 [2024-07-15 10:20:31.442666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:43.195 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:43.195 [2024-07-15 10:20:31.727994] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:03:43.195 [2024-07-15 10:20:31.728086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078199 ] 00:03:43.453 EAL: No free 2048 kB hugepages reported on node 1 00:03:43.453 [2024-07-15 10:20:31.785158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.453 [2024-07-15 10:20:31.893739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:03:43.453 [2024-07-15 10:20:31.893861] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:43.453 [2024-07-15 10:20:31.893884] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:43.453 [2024-07-15 10:20:31.893895] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1078187 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1078187 ']' 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1078187 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1078187 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1078187' 00:03:43.711 killing process with pid 1078187 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1078187 00:03:43.711 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1078187 00:03:43.970 00:03:43.970 real 0m1.220s 00:03:43.970 user 0m1.370s 00:03:43.970 sys 0m0.436s 00:03:43.970 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.970 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:43.970 ************************************ 00:03:43.970 END TEST exit_on_failed_rpc_init 00:03:43.970 ************************************ 00:03:43.970 10:20:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:43.971 10:20:32 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:43.971 00:03:43.971 real 0m13.496s 00:03:43.971 user 0m12.811s 00:03:43.971 sys 0m1.560s 00:03:43.971 10:20:32 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.971 10:20:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.971 ************************************ 00:03:43.971 END TEST skip_rpc 00:03:43.971 ************************************ 00:03:43.971 10:20:32 -- common/autotest_common.sh@1142 -- # return 0 00:03:43.971 10:20:32 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:43.971 10:20:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.971 10:20:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.971 10:20:32 -- common/autotest_common.sh@10 -- # set +x 00:03:44.230 ************************************ 00:03:44.230 START TEST rpc_client 00:03:44.230 ************************************ 00:03:44.230 10:20:32 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:44.230 * Looking for test storage... 00:03:44.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:44.230 10:20:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:44.230 OK 00:03:44.230 10:20:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:44.230 00:03:44.230 real 0m0.069s 00:03:44.230 user 0m0.033s 00:03:44.230 sys 0m0.042s 00:03:44.230 10:20:32 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.230 10:20:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:44.230 ************************************ 00:03:44.230 END TEST rpc_client 00:03:44.230 ************************************ 00:03:44.230 10:20:32 -- common/autotest_common.sh@1142 -- # return 0 00:03:44.230 10:20:32 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:44.230 10:20:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.230 10:20:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.230 10:20:32 -- common/autotest_common.sh@10 -- # set +x 00:03:44.230 ************************************ 00:03:44.230 START TEST json_config 00:03:44.230 ************************************ 00:03:44.230 10:20:32 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:44.230 10:20:32 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:44.230 
10:20:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:44.230 10:20:32 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:44.230 10:20:32 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:44.230 10:20:32 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:44.230 10:20:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.230 10:20:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.230 10:20:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.230 10:20:32 json_config -- paths/export.sh@5 -- # export PATH 00:03:44.230 10:20:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@47 -- # : 0 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:44.230 10:20:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:44.230 10:20:32 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:44.231 10:20:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:44.231 10:20:32 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:44.231 10:20:32 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:44.231 10:20:32 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:44.231 INFO: JSON configuration test init 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:44.231 10:20:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:44.231 10:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:44.231 10:20:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:44.231 10:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.231 10:20:32 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:44.231 10:20:32 json_config -- json_config/common.sh@9 -- # local app=target 00:03:44.231 10:20:32 json_config -- json_config/common.sh@10 -- # shift 00:03:44.231 10:20:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:44.231 10:20:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:44.231 10:20:32 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:44.231 10:20:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:44.231 10:20:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:44.231 10:20:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1078441 00:03:44.231 10:20:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:44.231 10:20:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:44.231 Waiting for target to run... 00:03:44.231 10:20:32 json_config -- json_config/common.sh@25 -- # waitforlisten 1078441 /var/tmp/spdk_tgt.sock 00:03:44.231 10:20:32 json_config -- common/autotest_common.sh@829 -- # '[' -z 1078441 ']' 00:03:44.231 10:20:32 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:44.231 10:20:32 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:44.231 10:20:32 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:44.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:44.231 10:20:32 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:44.231 10:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.231 [2024-07-15 10:20:32.745392] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:03:44.231 [2024-07-15 10:20:32.745484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078441 ] 00:03:44.231 EAL: No free 2048 kB hugepages reported on node 1 00:03:44.807 [2024-07-15 10:20:33.242626] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:44.807 [2024-07-15 10:20:33.336743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.371 10:20:33 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:45.371 10:20:33 json_config -- common/autotest_common.sh@862 -- # return 0 00:03:45.371 10:20:33 json_config -- json_config/common.sh@26 -- # echo '' 00:03:45.371 00:03:45.371 10:20:33 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:45.371 10:20:33 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:45.371 10:20:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:45.371 10:20:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.371 10:20:33 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:45.371 10:20:33 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:45.371 10:20:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:45.371 10:20:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.371 10:20:33 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:45.371 10:20:33 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:45.371 10:20:33 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:48.651 10:20:36 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:48.651 10:20:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:48.651 10:20:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:48.651 10:20:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.651 10:20:36 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:48.651 10:20:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:48.651 10:20:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:48.651 10:20:36 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:48.651 10:20:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:48.651 10:20:36 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:48.651 10:20:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:48.651 10:20:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@55 -- # return 0 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:48.651 10:20:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:48.651 10:20:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:48.651 10:20:37 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:48.651 10:20:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:48.909 MallocForNvmf0 00:03:48.909 10:20:37 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:48.909 10:20:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:49.166 MallocForNvmf1 00:03:49.166 10:20:37 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:49.167 10:20:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:49.425 [2024-07-15 10:20:37.825388] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:49.425 10:20:37 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:49.425 10:20:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:49.682 10:20:38 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:49.682 10:20:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:49.940 10:20:38 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:49.940 10:20:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:50.198 10:20:38 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:50.198 10:20:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:50.456 [2024-07-15 10:20:38.788393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:50.456 10:20:38 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:03:50.456 10:20:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:50.456 10:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.456 10:20:38 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:03:50.456 10:20:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:50.456 10:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.456 10:20:38 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:50.456 10:20:38 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:50.456 10:20:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:50.713 MallocBdevForConfigChangeCheck 00:03:50.713 10:20:39 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:50.713 10:20:39 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:50.713 10:20:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.713 10:20:39 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:50.713 10:20:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:50.971 10:20:39 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:03:50.971 INFO: shutting down applications... 00:03:50.971 10:20:39 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:50.971 10:20:39 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:50.971 10:20:39 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:50.971 10:20:39 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:52.869 Calling clear_iscsi_subsystem 00:03:52.870 Calling clear_nvmf_subsystem 00:03:52.870 Calling clear_nbd_subsystem 00:03:52.870 Calling clear_ublk_subsystem 00:03:52.870 Calling clear_vhost_blk_subsystem 00:03:52.870 Calling clear_vhost_scsi_subsystem 00:03:52.870 Calling clear_bdev_subsystem 00:03:52.870 10:20:41 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:52.870 10:20:41 json_config -- json_config/json_config.sh@343 -- # count=100 00:03:52.870 10:20:41 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:03:52.870 10:20:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:52.870 10:20:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:52.870 10:20:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:53.127 10:20:41 json_config -- json_config/json_config.sh@345 -- # break 00:03:53.127 10:20:41 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:03:53.127 10:20:41 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:03:53.127 10:20:41 json_config -- json_config/common.sh@31 -- # local app=target 00:03:53.127 10:20:41 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:53.127 10:20:41 json_config -- json_config/common.sh@35 -- # [[ -n 1078441 ]] 00:03:53.127 10:20:41 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1078441 00:03:53.127 10:20:41 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:53.127 10:20:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:53.127 10:20:41 json_config -- json_config/common.sh@41 -- # kill -0 1078441 00:03:53.127 10:20:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:53.715 10:20:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:53.715 10:20:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:53.715 10:20:41 json_config -- json_config/common.sh@41 -- # kill -0 1078441 00:03:53.715 10:20:41 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:53.715 10:20:41 json_config -- json_config/common.sh@43 -- # break 00:03:53.715 10:20:41 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:53.715 10:20:41 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:03:53.715 SPDK target shutdown done 00:03:53.715 10:20:41 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:03:53.715 INFO: relaunching applications... 00:03:53.715 10:20:41 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:53.715 10:20:41 json_config -- json_config/common.sh@9 -- # local app=target 00:03:53.715 10:20:41 json_config -- json_config/common.sh@10 -- # shift 00:03:53.715 10:20:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:53.715 10:20:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:53.716 10:20:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:53.716 10:20:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:53.716 10:20:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:53.716 10:20:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1079634 00:03:53.716 10:20:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:53.716 10:20:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:53.716 Waiting for target to run... 00:03:53.716 10:20:41 json_config -- json_config/common.sh@25 -- # waitforlisten 1079634 /var/tmp/spdk_tgt.sock 00:03:53.716 10:20:41 json_config -- common/autotest_common.sh@829 -- # '[' -z 1079634 ']' 00:03:53.716 10:20:41 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:53.716 10:20:41 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:53.716 10:20:41 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:53.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:53.716 10:20:41 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:53.716 10:20:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.716 [2024-07-15 10:20:42.034404] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:03:53.716 [2024-07-15 10:20:42.034485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1079634 ] 00:03:53.716 EAL: No free 2048 kB hugepages reported on node 1 00:03:54.289 [2024-07-15 10:20:42.556487] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.289 [2024-07-15 10:20:42.647750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.576 [2024-07-15 10:20:45.675002] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:57.576 [2024-07-15 10:20:45.707420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:58.141 10:20:46 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:58.141 10:20:46 json_config -- common/autotest_common.sh@862 -- # return 0 00:03:58.141 10:20:46 json_config -- json_config/common.sh@26 -- # echo '' 00:03:58.141 00:03:58.141 10:20:46 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:03:58.141 10:20:46 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:58.141 INFO: Checking if target configuration is the same... 00:03:58.141 10:20:46 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.141 10:20:46 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:03:58.141 10:20:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:58.141 + '[' 2 -ne 2 ']' 00:03:58.141 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:58.141 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:58.141 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.141 +++ basename /dev/fd/62 00:03:58.141 ++ mktemp /tmp/62.XXX 00:03:58.141 + tmp_file_1=/tmp/62.CnB 00:03:58.141 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.141 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:58.141 + tmp_file_2=/tmp/spdk_tgt_config.json.Uly 00:03:58.141 + ret=0 00:03:58.141 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:58.399 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:58.399 + diff -u /tmp/62.CnB /tmp/spdk_tgt_config.json.Uly 00:03:58.399 + echo 'INFO: JSON config files are the same' 00:03:58.399 INFO: JSON config files are the same 00:03:58.399 + rm /tmp/62.CnB /tmp/spdk_tgt_config.json.Uly 00:03:58.399 + exit 0 00:03:58.399 10:20:46 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:03:58.399 10:20:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:58.399 INFO: changing configuration and checking if this can be detected... 
00:03:58.399 10:20:46 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:58.399 10:20:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:58.657 10:20:47 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.657 10:20:47 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:03:58.657 10:20:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:58.657 + '[' 2 -ne 2 ']' 00:03:58.657 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:58.657 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:58.657 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.657 +++ basename /dev/fd/62 00:03:58.657 ++ mktemp /tmp/62.XXX 00:03:58.657 + tmp_file_1=/tmp/62.lXj 00:03:58.657 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.657 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:58.657 + tmp_file_2=/tmp/spdk_tgt_config.json.Q1N 00:03:58.657 + ret=0 00:03:58.657 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:59.223 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:59.223 + diff -u /tmp/62.lXj /tmp/spdk_tgt_config.json.Q1N 00:03:59.223 + ret=1 00:03:59.224 + echo '=== Start of file: /tmp/62.lXj ===' 00:03:59.224 + cat /tmp/62.lXj 00:03:59.224 + echo '=== End of file: /tmp/62.lXj ===' 00:03:59.224 + echo '' 00:03:59.224 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Q1N ===' 00:03:59.224 + cat /tmp/spdk_tgt_config.json.Q1N 00:03:59.224 + echo '=== End of file: /tmp/spdk_tgt_config.json.Q1N ===' 00:03:59.224 + echo '' 00:03:59.224 + rm /tmp/62.lXj /tmp/spdk_tgt_config.json.Q1N 00:03:59.224 + exit 1 00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:03:59.224 INFO: configuration change detected. 
00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@317 -- # [[ -n 1079634 ]] 00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@193 -- # uname -s 00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.224 10:20:47 json_config -- json_config/json_config.sh@323 -- # killprocess 1079634 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@948 -- # '[' -z 1079634 ']' 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@952 -- # kill -0 1079634 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@953 -- # uname 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1079634 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1079634' 00:03:59.224 killing process with pid 1079634 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@967 -- # kill 1079634 00:03:59.224 10:20:47 json_config -- common/autotest_common.sh@972 -- # wait 1079634 00:04:01.123 10:20:49 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.123 10:20:49 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:01.123 10:20:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:01.123 10:20:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.123 10:20:49 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:01.123 10:20:49 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:01.123 INFO: Success 00:04:01.123 00:04:01.123 real 0m16.592s 
00:04:01.123 user 0m18.265s 00:04:01.123 sys 0m2.284s 00:04:01.123 10:20:49 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.123 10:20:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.123 ************************************ 00:04:01.123 END TEST json_config 00:04:01.123 ************************************ 00:04:01.123 10:20:49 -- common/autotest_common.sh@1142 -- # return 0 00:04:01.123 10:20:49 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:01.123 10:20:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.123 10:20:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.123 10:20:49 -- common/autotest_common.sh@10 -- # set +x 00:04:01.123 ************************************ 00:04:01.123 START TEST json_config_extra_key 00:04:01.123 ************************************ 00:04:01.123 10:20:49 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:01.123 10:20:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:01.123 10:20:49 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:01.123 10:20:49 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:01.123 10:20:49 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:01.124 10:20:49 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:01.124 10:20:49 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.124 10:20:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.124 10:20:49 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.124 10:20:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:01.124 10:20:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.124 10:20:49 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:01.124 10:20:49 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:01.124 10:20:49 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:01.124 10:20:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:01.124 10:20:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:01.124 10:20:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:01.124 10:20:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:01.124 10:20:49 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:01.124 10:20:49 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:01.124 10:20:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:01.124 10:20:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:01.124 10:20:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:01.124 10:20:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:01.124 10:20:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:01.124 10:20:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:01.124 10:20:49 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:01.124 10:20:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:01.124 10:20:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:01.124 10:20:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:01.124 10:20:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:01.124 INFO: launching applications... 00:04:01.124 10:20:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:01.124 10:20:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:01.124 10:20:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:01.124 10:20:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:01.124 10:20:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:01.124 10:20:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:01.124 10:20:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.124 10:20:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.124 10:20:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1080668 00:04:01.124 10:20:49 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:01.124 10:20:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:01.124 Waiting for target to run... 00:04:01.124 10:20:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1080668 /var/tmp/spdk_tgt.sock 00:04:01.124 10:20:49 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1080668 ']' 00:04:01.124 10:20:49 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:01.124 10:20:49 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:01.124 10:20:49 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:01.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:01.124 10:20:49 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:01.124 10:20:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:01.124 [2024-07-15 10:20:49.382523] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:01.124 [2024-07-15 10:20:49.382607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080668 ] 00:04:01.124 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.382 [2024-07-15 10:20:49.874030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.641 [2024-07-15 10:20:49.967663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.899 10:20:50 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:01.899 10:20:50 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:01.899 10:20:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:01.899 00:04:01.899 10:20:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:01.899 INFO: shutting down applications... 00:04:01.899 10:20:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:01.899 10:20:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:01.899 10:20:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:01.899 10:20:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1080668 ]] 00:04:01.899 10:20:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1080668 00:04:01.899 10:20:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:01.899 10:20:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:01.899 10:20:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1080668 00:04:01.899 10:20:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:02.464 10:20:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:02.464 10:20:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:02.465 10:20:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1080668 00:04:02.465 10:20:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:02.465 10:20:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:02.465 10:20:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:02.465 10:20:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:02.465 SPDK target shutdown done 00:04:02.465 10:20:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:02.465 Success 00:04:02.465 00:04:02.465 real 0m1.555s 00:04:02.465 user 0m1.375s 00:04:02.465 sys 0m0.617s 00:04:02.465 10:20:50 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.465 10:20:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:02.465 ************************************ 00:04:02.465 END TEST json_config_extra_key 00:04:02.465 ************************************ 00:04:02.465 10:20:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:02.465 10:20:50 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:02.465 10:20:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.465 10:20:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.465 10:20:50 -- 
common/autotest_common.sh@10 -- # set +x 00:04:02.465 ************************************ 00:04:02.465 START TEST alias_rpc 00:04:02.465 ************************************ 00:04:02.465 10:20:50 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:02.465 * Looking for test storage... 00:04:02.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:02.465 10:20:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:02.465 10:20:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1080867 00:04:02.465 10:20:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.465 10:20:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1080867 00:04:02.465 10:20:50 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1080867 ']' 00:04:02.465 10:20:50 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.465 10:20:50 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:02.465 10:20:50 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.465 10:20:50 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:02.465 10:20:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.465 [2024-07-15 10:20:50.988626] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:02.465 [2024-07-15 10:20:50.988707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080867 ] 00:04:02.723 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.723 [2024-07-15 10:20:51.045520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.723 [2024-07-15 10:20:51.150478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.005 10:20:51 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:03.005 10:20:51 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:03.005 10:20:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:03.262 10:20:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1080867 00:04:03.262 10:20:51 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1080867 ']' 00:04:03.262 10:20:51 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1080867 00:04:03.262 10:20:51 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:03.262 10:20:51 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:03.262 10:20:51 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1080867 00:04:03.262 10:20:51 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:03.262 10:20:51 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:03.262 10:20:51 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1080867' 00:04:03.262 killing process with pid 1080867 00:04:03.262 10:20:51 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1080867 00:04:03.262 10:20:51 alias_rpc -- common/autotest_common.sh@972 -- # wait 1080867 00:04:03.828 00:04:03.828 real 0m1.210s 00:04:03.828 user 0m1.302s 00:04:03.828 sys 0m0.385s 00:04:03.828 10:20:52 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.828 10:20:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.828 ************************************ 00:04:03.828 END TEST alias_rpc 00:04:03.828 ************************************ 00:04:03.828 10:20:52 -- common/autotest_common.sh@1142 -- # return 0 00:04:03.828 10:20:52 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:03.828 10:20:52 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:03.828 10:20:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.828 10:20:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.828 10:20:52 -- common/autotest_common.sh@10 -- # set +x 00:04:03.828 ************************************ 00:04:03.828 START TEST spdkcli_tcp 00:04:03.828 ************************************ 00:04:03.828 10:20:52 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:03.828 * Looking for test storage... 00:04:03.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:03.828 10:20:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:03.828 10:20:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:03.828 10:20:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:03.828 10:20:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:03.828 10:20:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:03.828 10:20:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:03.828 10:20:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:03.828 10:20:52 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:03.828 10:20:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:03.828 10:20:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1081052 00:04:03.828 10:20:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:03.829 10:20:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1081052 00:04:03.829 10:20:52 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1081052 ']' 00:04:03.829 10:20:52 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.829 10:20:52 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:03.829 10:20:52 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.829 10:20:52 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:03.829 10:20:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:03.829 [2024-07-15 10:20:52.249302] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
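Note on the alias_rpc pass that ends above: it follows the target lifecycle visible in the trace — launch spdk_tgt, wait for its UNIX-domain RPC socket (/var/tmp/spdk.sock), drive it with the scripts/rpc.py load_config -i call shown above, then tear it down with kill/wait (json_config_extra_key above used SIGINT for the same purpose). A minimal out-of-harness sketch of that flow; the config filename and the fixed 30-try polling loop are illustrative assumptions, not part of alias_rpc.sh:

    # start the target and remember its pid
    ./build/bin/spdk_tgt &
    tgt_pid=$!
    # poll the default RPC socket until it answers (retry count is an assumption)
    for _ in $(seq 1 30); do
        ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done
    # replay a saved configuration through the alias layer, as the test does
    ./scripts/rpc.py load_config -i < config.json
    # stop the target the same way the harness does
    kill "$tgt_pid"
    wait "$tgt_pid" 2>/dev/null || true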
00:04:03.829 [2024-07-15 10:20:52.249386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1081052 ] 00:04:03.829 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.829 [2024-07-15 10:20:52.306592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:04.086 [2024-07-15 10:20:52.415881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:04.086 [2024-07-15 10:20:52.415885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.342 10:20:52 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:04.342 10:20:52 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:04.343 10:20:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1081178 00:04:04.343 10:20:52 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:04.343 10:20:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:04.600 [ 00:04:04.600 "bdev_malloc_delete", 00:04:04.600 "bdev_malloc_create", 00:04:04.600 "bdev_null_resize", 00:04:04.600 "bdev_null_delete", 00:04:04.600 "bdev_null_create", 00:04:04.600 "bdev_nvme_cuse_unregister", 00:04:04.600 "bdev_nvme_cuse_register", 00:04:04.600 "bdev_opal_new_user", 00:04:04.600 "bdev_opal_set_lock_state", 00:04:04.600 "bdev_opal_delete", 00:04:04.600 "bdev_opal_get_info", 00:04:04.600 "bdev_opal_create", 00:04:04.600 "bdev_nvme_opal_revert", 00:04:04.600 "bdev_nvme_opal_init", 00:04:04.600 "bdev_nvme_send_cmd", 00:04:04.600 "bdev_nvme_get_path_iostat", 00:04:04.600 "bdev_nvme_get_mdns_discovery_info", 00:04:04.600 "bdev_nvme_stop_mdns_discovery", 00:04:04.600 "bdev_nvme_start_mdns_discovery", 00:04:04.600 "bdev_nvme_set_multipath_policy", 00:04:04.600 "bdev_nvme_set_preferred_path", 00:04:04.600 "bdev_nvme_get_io_paths", 00:04:04.600 "bdev_nvme_remove_error_injection", 00:04:04.600 "bdev_nvme_add_error_injection", 00:04:04.600 "bdev_nvme_get_discovery_info", 00:04:04.600 "bdev_nvme_stop_discovery", 00:04:04.600 "bdev_nvme_start_discovery", 00:04:04.600 "bdev_nvme_get_controller_health_info", 00:04:04.600 "bdev_nvme_disable_controller", 00:04:04.600 "bdev_nvme_enable_controller", 00:04:04.600 "bdev_nvme_reset_controller", 00:04:04.600 "bdev_nvme_get_transport_statistics", 00:04:04.600 "bdev_nvme_apply_firmware", 00:04:04.600 "bdev_nvme_detach_controller", 00:04:04.600 "bdev_nvme_get_controllers", 00:04:04.600 "bdev_nvme_attach_controller", 00:04:04.600 "bdev_nvme_set_hotplug", 00:04:04.600 "bdev_nvme_set_options", 00:04:04.600 "bdev_passthru_delete", 00:04:04.600 "bdev_passthru_create", 00:04:04.600 "bdev_lvol_set_parent_bdev", 00:04:04.600 "bdev_lvol_set_parent", 00:04:04.600 "bdev_lvol_check_shallow_copy", 00:04:04.600 "bdev_lvol_start_shallow_copy", 00:04:04.600 "bdev_lvol_grow_lvstore", 00:04:04.600 "bdev_lvol_get_lvols", 00:04:04.600 "bdev_lvol_get_lvstores", 00:04:04.600 "bdev_lvol_delete", 00:04:04.600 "bdev_lvol_set_read_only", 00:04:04.600 "bdev_lvol_resize", 00:04:04.600 "bdev_lvol_decouple_parent", 00:04:04.600 "bdev_lvol_inflate", 00:04:04.600 "bdev_lvol_rename", 00:04:04.600 "bdev_lvol_clone_bdev", 00:04:04.600 "bdev_lvol_clone", 00:04:04.600 "bdev_lvol_snapshot", 00:04:04.600 "bdev_lvol_create", 00:04:04.600 "bdev_lvol_delete_lvstore", 00:04:04.600 
"bdev_lvol_rename_lvstore", 00:04:04.600 "bdev_lvol_create_lvstore", 00:04:04.600 "bdev_raid_set_options", 00:04:04.600 "bdev_raid_remove_base_bdev", 00:04:04.600 "bdev_raid_add_base_bdev", 00:04:04.600 "bdev_raid_delete", 00:04:04.600 "bdev_raid_create", 00:04:04.600 "bdev_raid_get_bdevs", 00:04:04.600 "bdev_error_inject_error", 00:04:04.600 "bdev_error_delete", 00:04:04.600 "bdev_error_create", 00:04:04.601 "bdev_split_delete", 00:04:04.601 "bdev_split_create", 00:04:04.601 "bdev_delay_delete", 00:04:04.601 "bdev_delay_create", 00:04:04.601 "bdev_delay_update_latency", 00:04:04.601 "bdev_zone_block_delete", 00:04:04.601 "bdev_zone_block_create", 00:04:04.601 "blobfs_create", 00:04:04.601 "blobfs_detect", 00:04:04.601 "blobfs_set_cache_size", 00:04:04.601 "bdev_aio_delete", 00:04:04.601 "bdev_aio_rescan", 00:04:04.601 "bdev_aio_create", 00:04:04.601 "bdev_ftl_set_property", 00:04:04.601 "bdev_ftl_get_properties", 00:04:04.601 "bdev_ftl_get_stats", 00:04:04.601 "bdev_ftl_unmap", 00:04:04.601 "bdev_ftl_unload", 00:04:04.601 "bdev_ftl_delete", 00:04:04.601 "bdev_ftl_load", 00:04:04.601 "bdev_ftl_create", 00:04:04.601 "bdev_virtio_attach_controller", 00:04:04.601 "bdev_virtio_scsi_get_devices", 00:04:04.601 "bdev_virtio_detach_controller", 00:04:04.601 "bdev_virtio_blk_set_hotplug", 00:04:04.601 "bdev_iscsi_delete", 00:04:04.601 "bdev_iscsi_create", 00:04:04.601 "bdev_iscsi_set_options", 00:04:04.601 "accel_error_inject_error", 00:04:04.601 "ioat_scan_accel_module", 00:04:04.601 "dsa_scan_accel_module", 00:04:04.601 "iaa_scan_accel_module", 00:04:04.601 "vfu_virtio_create_scsi_endpoint", 00:04:04.601 "vfu_virtio_scsi_remove_target", 00:04:04.601 "vfu_virtio_scsi_add_target", 00:04:04.601 "vfu_virtio_create_blk_endpoint", 00:04:04.601 "vfu_virtio_delete_endpoint", 00:04:04.601 "keyring_file_remove_key", 00:04:04.601 "keyring_file_add_key", 00:04:04.601 "keyring_linux_set_options", 00:04:04.601 "iscsi_get_histogram", 00:04:04.601 "iscsi_enable_histogram", 00:04:04.601 "iscsi_set_options", 00:04:04.601 "iscsi_get_auth_groups", 00:04:04.601 "iscsi_auth_group_remove_secret", 00:04:04.601 "iscsi_auth_group_add_secret", 00:04:04.601 "iscsi_delete_auth_group", 00:04:04.601 "iscsi_create_auth_group", 00:04:04.601 "iscsi_set_discovery_auth", 00:04:04.601 "iscsi_get_options", 00:04:04.601 "iscsi_target_node_request_logout", 00:04:04.601 "iscsi_target_node_set_redirect", 00:04:04.601 "iscsi_target_node_set_auth", 00:04:04.601 "iscsi_target_node_add_lun", 00:04:04.601 "iscsi_get_stats", 00:04:04.601 "iscsi_get_connections", 00:04:04.601 "iscsi_portal_group_set_auth", 00:04:04.601 "iscsi_start_portal_group", 00:04:04.601 "iscsi_delete_portal_group", 00:04:04.601 "iscsi_create_portal_group", 00:04:04.601 "iscsi_get_portal_groups", 00:04:04.601 "iscsi_delete_target_node", 00:04:04.601 "iscsi_target_node_remove_pg_ig_maps", 00:04:04.601 "iscsi_target_node_add_pg_ig_maps", 00:04:04.601 "iscsi_create_target_node", 00:04:04.601 "iscsi_get_target_nodes", 00:04:04.601 "iscsi_delete_initiator_group", 00:04:04.601 "iscsi_initiator_group_remove_initiators", 00:04:04.601 "iscsi_initiator_group_add_initiators", 00:04:04.601 "iscsi_create_initiator_group", 00:04:04.601 "iscsi_get_initiator_groups", 00:04:04.601 "nvmf_set_crdt", 00:04:04.601 "nvmf_set_config", 00:04:04.601 "nvmf_set_max_subsystems", 00:04:04.601 "nvmf_stop_mdns_prr", 00:04:04.601 "nvmf_publish_mdns_prr", 00:04:04.601 "nvmf_subsystem_get_listeners", 00:04:04.601 "nvmf_subsystem_get_qpairs", 00:04:04.601 "nvmf_subsystem_get_controllers", 00:04:04.601 
"nvmf_get_stats", 00:04:04.601 "nvmf_get_transports", 00:04:04.601 "nvmf_create_transport", 00:04:04.601 "nvmf_get_targets", 00:04:04.601 "nvmf_delete_target", 00:04:04.601 "nvmf_create_target", 00:04:04.601 "nvmf_subsystem_allow_any_host", 00:04:04.601 "nvmf_subsystem_remove_host", 00:04:04.601 "nvmf_subsystem_add_host", 00:04:04.601 "nvmf_ns_remove_host", 00:04:04.601 "nvmf_ns_add_host", 00:04:04.601 "nvmf_subsystem_remove_ns", 00:04:04.601 "nvmf_subsystem_add_ns", 00:04:04.601 "nvmf_subsystem_listener_set_ana_state", 00:04:04.601 "nvmf_discovery_get_referrals", 00:04:04.601 "nvmf_discovery_remove_referral", 00:04:04.601 "nvmf_discovery_add_referral", 00:04:04.601 "nvmf_subsystem_remove_listener", 00:04:04.601 "nvmf_subsystem_add_listener", 00:04:04.601 "nvmf_delete_subsystem", 00:04:04.601 "nvmf_create_subsystem", 00:04:04.601 "nvmf_get_subsystems", 00:04:04.601 "env_dpdk_get_mem_stats", 00:04:04.601 "nbd_get_disks", 00:04:04.601 "nbd_stop_disk", 00:04:04.601 "nbd_start_disk", 00:04:04.601 "ublk_recover_disk", 00:04:04.601 "ublk_get_disks", 00:04:04.601 "ublk_stop_disk", 00:04:04.601 "ublk_start_disk", 00:04:04.601 "ublk_destroy_target", 00:04:04.601 "ublk_create_target", 00:04:04.601 "virtio_blk_create_transport", 00:04:04.601 "virtio_blk_get_transports", 00:04:04.601 "vhost_controller_set_coalescing", 00:04:04.601 "vhost_get_controllers", 00:04:04.601 "vhost_delete_controller", 00:04:04.601 "vhost_create_blk_controller", 00:04:04.601 "vhost_scsi_controller_remove_target", 00:04:04.601 "vhost_scsi_controller_add_target", 00:04:04.601 "vhost_start_scsi_controller", 00:04:04.601 "vhost_create_scsi_controller", 00:04:04.601 "thread_set_cpumask", 00:04:04.601 "framework_get_governor", 00:04:04.601 "framework_get_scheduler", 00:04:04.601 "framework_set_scheduler", 00:04:04.601 "framework_get_reactors", 00:04:04.601 "thread_get_io_channels", 00:04:04.601 "thread_get_pollers", 00:04:04.601 "thread_get_stats", 00:04:04.601 "framework_monitor_context_switch", 00:04:04.601 "spdk_kill_instance", 00:04:04.601 "log_enable_timestamps", 00:04:04.601 "log_get_flags", 00:04:04.601 "log_clear_flag", 00:04:04.601 "log_set_flag", 00:04:04.601 "log_get_level", 00:04:04.601 "log_set_level", 00:04:04.601 "log_get_print_level", 00:04:04.601 "log_set_print_level", 00:04:04.601 "framework_enable_cpumask_locks", 00:04:04.601 "framework_disable_cpumask_locks", 00:04:04.601 "framework_wait_init", 00:04:04.601 "framework_start_init", 00:04:04.601 "scsi_get_devices", 00:04:04.601 "bdev_get_histogram", 00:04:04.601 "bdev_enable_histogram", 00:04:04.601 "bdev_set_qos_limit", 00:04:04.601 "bdev_set_qd_sampling_period", 00:04:04.601 "bdev_get_bdevs", 00:04:04.601 "bdev_reset_iostat", 00:04:04.601 "bdev_get_iostat", 00:04:04.601 "bdev_examine", 00:04:04.601 "bdev_wait_for_examine", 00:04:04.601 "bdev_set_options", 00:04:04.601 "notify_get_notifications", 00:04:04.601 "notify_get_types", 00:04:04.601 "accel_get_stats", 00:04:04.601 "accel_set_options", 00:04:04.601 "accel_set_driver", 00:04:04.601 "accel_crypto_key_destroy", 00:04:04.601 "accel_crypto_keys_get", 00:04:04.601 "accel_crypto_key_create", 00:04:04.601 "accel_assign_opc", 00:04:04.601 "accel_get_module_info", 00:04:04.601 "accel_get_opc_assignments", 00:04:04.601 "vmd_rescan", 00:04:04.601 "vmd_remove_device", 00:04:04.601 "vmd_enable", 00:04:04.601 "sock_get_default_impl", 00:04:04.601 "sock_set_default_impl", 00:04:04.601 "sock_impl_set_options", 00:04:04.601 "sock_impl_get_options", 00:04:04.601 "iobuf_get_stats", 00:04:04.601 "iobuf_set_options", 
00:04:04.601 "keyring_get_keys", 00:04:04.601 "framework_get_pci_devices", 00:04:04.601 "framework_get_config", 00:04:04.601 "framework_get_subsystems", 00:04:04.601 "vfu_tgt_set_base_path", 00:04:04.601 "trace_get_info", 00:04:04.601 "trace_get_tpoint_group_mask", 00:04:04.601 "trace_disable_tpoint_group", 00:04:04.601 "trace_enable_tpoint_group", 00:04:04.601 "trace_clear_tpoint_mask", 00:04:04.601 "trace_set_tpoint_mask", 00:04:04.601 "spdk_get_version", 00:04:04.601 "rpc_get_methods" 00:04:04.601 ] 00:04:04.601 10:20:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:04.601 10:20:52 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:04.601 10:20:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:04.601 10:20:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:04.601 10:20:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1081052 00:04:04.601 10:20:52 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1081052 ']' 00:04:04.601 10:20:52 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1081052 00:04:04.601 10:20:52 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:04.601 10:20:52 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:04.601 10:20:52 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1081052 00:04:04.601 10:20:52 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:04.601 10:20:52 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:04.601 10:20:52 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1081052' 00:04:04.601 killing process with pid 1081052 00:04:04.601 10:20:52 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1081052 00:04:04.601 10:20:52 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1081052 00:04:04.858 00:04:04.858 real 0m1.260s 00:04:04.858 user 0m2.214s 00:04:04.858 sys 0m0.451s 00:04:04.858 10:20:53 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.858 10:20:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:04.858 ************************************ 00:04:04.858 END TEST spdkcli_tcp 00:04:04.858 ************************************ 00:04:05.114 10:20:53 -- common/autotest_common.sh@1142 -- # return 0 00:04:05.114 10:20:53 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:05.114 10:20:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.114 10:20:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.114 10:20:53 -- common/autotest_common.sh@10 -- # set +x 00:04:05.114 ************************************ 00:04:05.114 START TEST dpdk_mem_utility 00:04:05.114 ************************************ 00:04:05.114 10:20:53 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:05.114 * Looking for test storage... 
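Note on the spdkcli_tcp pass that finishes above (the long rpc_get_methods listing): the client never touches the UNIX socket directly — the target listens on /var/tmp/spdk.sock as usual, socat bridges that socket to TCP 127.0.0.1:9998, and rpc.py talks to the TCP side with retries and a per-attempt timeout. A condensed sketch of the same bridge, lifted from the commands in the trace (running it by hand assumes a target is already up on the default socket):

    # bridge the target's UNIX-domain RPC socket to a local TCP port
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # drive the target over TCP: 100 retries, 2 s timeout per attempt
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"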
00:04:05.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:05.114 10:20:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:05.114 10:20:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1081329 00:04:05.114 10:20:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.114 10:20:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1081329 00:04:05.114 10:20:53 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1081329 ']' 00:04:05.114 10:20:53 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.114 10:20:53 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:05.114 10:20:53 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.114 10:20:53 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:05.114 10:20:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:05.114 [2024-07-15 10:20:53.551056] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:05.114 [2024-07-15 10:20:53.551166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1081329 ] 00:04:05.114 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.114 [2024-07-15 10:20:53.611202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.370 [2024-07-15 10:20:53.718843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.628 10:20:53 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:05.628 10:20:53 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:05.628 10:20:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:05.628 10:20:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:05.628 10:20:53 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.628 10:20:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:05.628 { 00:04:05.628 "filename": "/tmp/spdk_mem_dump.txt" 00:04:05.628 } 00:04:05.628 10:20:53 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.628 10:20:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:05.628 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:05.628 1 heaps totaling size 814.000000 MiB 00:04:05.628 size: 814.000000 MiB heap id: 0 00:04:05.628 end heaps---------- 00:04:05.628 8 mempools totaling size 598.116089 MiB 00:04:05.628 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:05.628 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:05.628 size: 84.521057 MiB name: bdev_io_1081329 00:04:05.628 size: 51.011292 MiB name: evtpool_1081329 00:04:05.628 
size: 50.003479 MiB name: msgpool_1081329 00:04:05.628 size: 21.763794 MiB name: PDU_Pool 00:04:05.628 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:05.628 size: 0.026123 MiB name: Session_Pool 00:04:05.628 end mempools------- 00:04:05.628 6 memzones totaling size 4.142822 MiB 00:04:05.628 size: 1.000366 MiB name: RG_ring_0_1081329 00:04:05.628 size: 1.000366 MiB name: RG_ring_1_1081329 00:04:05.628 size: 1.000366 MiB name: RG_ring_4_1081329 00:04:05.628 size: 1.000366 MiB name: RG_ring_5_1081329 00:04:05.628 size: 0.125366 MiB name: RG_ring_2_1081329 00:04:05.628 size: 0.015991 MiB name: RG_ring_3_1081329 00:04:05.628 end memzones------- 00:04:05.628 10:20:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:05.628 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:05.628 list of free elements. size: 12.519348 MiB 00:04:05.628 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:05.628 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:05.628 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:05.628 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:05.628 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:05.628 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:05.628 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:05.628 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:05.628 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:05.628 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:05.628 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:05.628 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:05.628 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:05.628 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:05.628 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:05.628 list of standard malloc elements. 
size: 199.218079 MiB 00:04:05.628 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:05.628 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:05.628 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:05.628 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:05.628 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:05.628 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:05.628 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:05.628 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:05.628 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:05.628 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:05.628 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:05.628 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:05.628 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:05.628 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:05.628 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:05.628 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:05.628 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:05.628 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:05.628 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:05.628 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:05.628 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:05.628 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:05.628 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:05.628 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:05.628 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:05.628 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:05.628 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:05.628 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:05.628 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:05.628 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:05.628 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:05.628 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:05.628 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:05.628 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:05.628 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:05.628 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:05.628 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:05.628 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:05.628 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:05.628 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:05.628 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:05.628 list of memzone associated elements. 
size: 602.262573 MiB 00:04:05.628 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:05.628 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:05.628 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:05.628 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:05.628 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:05.628 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1081329_0 00:04:05.628 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:05.628 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1081329_0 00:04:05.628 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:05.628 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1081329_0 00:04:05.628 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:05.628 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:05.628 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:05.628 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:05.628 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:05.628 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1081329 00:04:05.628 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:05.628 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1081329 00:04:05.628 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:05.628 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1081329 00:04:05.628 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:05.628 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:05.628 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:05.628 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:05.628 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:05.628 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:05.628 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:05.628 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:05.628 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:05.628 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1081329 00:04:05.628 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:05.628 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1081329 00:04:05.628 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:05.628 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1081329 00:04:05.628 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:05.628 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1081329 00:04:05.628 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:05.628 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1081329 00:04:05.628 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:05.628 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:05.628 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:05.628 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:05.628 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:05.628 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:05.628 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:05.628 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1081329 00:04:05.628 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:05.628 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:05.628 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:05.628 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:05.628 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:05.628 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1081329 00:04:05.628 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:05.628 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:05.628 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:05.628 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1081329 00:04:05.628 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:05.628 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1081329 00:04:05.628 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:05.628 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:05.628 10:20:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:05.629 10:20:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1081329 00:04:05.629 10:20:54 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1081329 ']' 00:04:05.629 10:20:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1081329 00:04:05.629 10:20:54 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:05.629 10:20:54 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:05.629 10:20:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1081329 00:04:05.629 10:20:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:05.629 10:20:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:05.629 10:20:54 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1081329' 00:04:05.629 killing process with pid 1081329 00:04:05.629 10:20:54 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1081329 00:04:05.629 10:20:54 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1081329 00:04:06.193 00:04:06.193 real 0m1.063s 00:04:06.193 user 0m1.040s 00:04:06.193 sys 0m0.378s 00:04:06.193 10:20:54 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.193 10:20:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:06.193 ************************************ 00:04:06.193 END TEST dpdk_mem_utility 00:04:06.193 ************************************ 00:04:06.193 10:20:54 -- common/autotest_common.sh@1142 -- # return 0 00:04:06.193 10:20:54 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:06.193 10:20:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.193 10:20:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.193 10:20:54 -- common/autotest_common.sh@10 -- # set +x 00:04:06.193 ************************************ 00:04:06.193 START TEST event 00:04:06.193 ************************************ 00:04:06.193 10:20:54 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:06.193 * Looking for test storage... 
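Note on the dpdk_mem_utility pass above: it boils down to two tools. The env_dpdk_get_mem_stats RPC makes the target write a DPDK memory dump (the trace shows it returning "/tmp/spdk_mem_dump.txt"), and scripts/dpdk_mem_info.py summarizes that dump — first the heap/mempool/memzone totals, then, with -m 0, the element-level free/busy lists for heap 0 seen above. Reproducing it against a running target is just:

    # have the target dump its DPDK memory layout, then summarize the dump
    ./scripts/rpc.py env_dpdk_get_mem_stats
    ./scripts/dpdk_mem_info.py          # heap / mempool / memzone totals
    ./scripts/dpdk_mem_info.py -m 0     # element-level detail for heap 0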
00:04:06.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:06.193 10:20:54 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:06.193 10:20:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:06.194 10:20:54 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:06.194 10:20:54 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:06.194 10:20:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.194 10:20:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:06.194 ************************************ 00:04:06.194 START TEST event_perf 00:04:06.194 ************************************ 00:04:06.194 10:20:54 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:06.194 Running I/O for 1 seconds...[2024-07-15 10:20:54.653299] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:06.194 [2024-07-15 10:20:54.653364] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1081534 ] 00:04:06.194 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.194 [2024-07-15 10:20:54.710757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:06.450 [2024-07-15 10:20:54.814238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.451 [2024-07-15 10:20:54.814296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:06.451 [2024-07-15 10:20:54.814362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:06.451 [2024-07-15 10:20:54.814365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.397 Running I/O for 1 seconds... 00:04:07.397 lcore 0: 230366 00:04:07.397 lcore 1: 230366 00:04:07.397 lcore 2: 230366 00:04:07.397 lcore 3: 230366 00:04:07.397 done. 00:04:07.397 00:04:07.397 real 0m1.285s 00:04:07.397 user 0m4.210s 00:04:07.397 sys 0m0.071s 00:04:07.397 10:20:55 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.397 10:20:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:07.397 ************************************ 00:04:07.397 END TEST event_perf 00:04:07.397 ************************************ 00:04:07.655 10:20:55 event -- common/autotest_common.sh@1142 -- # return 0 00:04:07.655 10:20:55 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:07.655 10:20:55 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:07.655 10:20:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.655 10:20:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.655 ************************************ 00:04:07.655 START TEST event_reactor 00:04:07.655 ************************************ 00:04:07.655 10:20:55 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:07.655 [2024-07-15 10:20:55.990792] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
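Note on the event_perf output above: the four "lcore N: 230366" lines are per-reactor event counts for the one-second run on core mask 0xF, so aggregate throughput is simply their sum over the runtime — roughly 921k events/s across the four cores here. A one-liner to total them from a captured log (the log filename is an assumption):

    # sum the per-lcore counters printed by event_perf
    awk '/lcore [0-9]+:/ { total += $NF } END { printf "total: %d events in the 1 s window\n", total }' event_perf.log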
00:04:07.655 [2024-07-15 10:20:55.990879] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1081719 ] 00:04:07.655 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.655 [2024-07-15 10:20:56.051807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.655 [2024-07-15 10:20:56.152621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.024 test_start 00:04:09.024 oneshot 00:04:09.024 tick 100 00:04:09.024 tick 100 00:04:09.024 tick 250 00:04:09.024 tick 100 00:04:09.024 tick 100 00:04:09.024 tick 100 00:04:09.024 tick 250 00:04:09.024 tick 500 00:04:09.024 tick 100 00:04:09.024 tick 100 00:04:09.024 tick 250 00:04:09.024 tick 100 00:04:09.024 tick 100 00:04:09.024 test_end 00:04:09.024 00:04:09.024 real 0m1.285s 00:04:09.024 user 0m1.202s 00:04:09.024 sys 0m0.078s 00:04:09.024 10:20:57 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.024 10:20:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:09.024 ************************************ 00:04:09.024 END TEST event_reactor 00:04:09.024 ************************************ 00:04:09.024 10:20:57 event -- common/autotest_common.sh@1142 -- # return 0 00:04:09.024 10:20:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:09.024 10:20:57 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:09.024 10:20:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.024 10:20:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:09.024 ************************************ 00:04:09.024 START TEST event_reactor_perf 00:04:09.024 ************************************ 00:04:09.024 10:20:57 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:09.024 [2024-07-15 10:20:57.328520] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:09.024 [2024-07-15 10:20:57.328587] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1081873 ] 00:04:09.024 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.024 [2024-07-15 10:20:57.386883] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.024 [2024-07-15 10:20:57.495388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.395 test_start 00:04:10.395 test_end 00:04:10.395 Performance: 452804 events per second 00:04:10.395 00:04:10.395 real 0m1.293s 00:04:10.395 user 0m1.220s 00:04:10.395 sys 0m0.069s 00:04:10.395 10:20:58 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.395 10:20:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:10.395 ************************************ 00:04:10.395 END TEST event_reactor_perf 00:04:10.395 ************************************ 00:04:10.395 10:20:58 event -- common/autotest_common.sh@1142 -- # return 0 00:04:10.395 10:20:58 event -- event/event.sh@49 -- # uname -s 00:04:10.395 10:20:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:10.395 10:20:58 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:10.395 10:20:58 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.395 10:20:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.395 10:20:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:10.395 ************************************ 00:04:10.395 START TEST event_scheduler 00:04:10.395 ************************************ 00:04:10.395 10:20:58 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:10.395 * Looking for test storage... 00:04:10.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:10.395 10:20:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:10.395 10:20:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1082066 00:04:10.395 10:20:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.395 10:20:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:10.395 10:20:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1082066 00:04:10.395 10:20:58 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1082066 ']' 00:04:10.395 10:20:58 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.395 10:20:58 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:10.395 10:20:58 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:10.395 10:20:58 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:10.395 10:20:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.395 [2024-07-15 10:20:58.759935] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:10.395 [2024-07-15 10:20:58.760024] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082066 ] 00:04:10.395 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.395 [2024-07-15 10:20:58.822723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:10.395 [2024-07-15 10:20:58.937435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.395 [2024-07-15 10:20:58.937498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.395 [2024-07-15 10:20:58.937534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:10.395 [2024-07-15 10:20:58.937537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:10.653 10:20:58 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:10.653 10:20:58 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:10.653 10:20:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:10.653 10:20:58 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.653 10:20:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.653 [2024-07-15 10:20:58.962296] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:10.653 [2024-07-15 10:20:58.962321] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:10.653 [2024-07-15 10:20:58.962338] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:10.653 [2024-07-15 10:20:58.962349] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:10.653 [2024-07-15 10:20:58.962358] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:10.653 10:20:58 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.653 10:20:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:10.653 10:20:58 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.653 10:20:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.653 [2024-07-15 10:20:59.057297] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
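Note on the event_scheduler startup above: this is why the app is launched with --wait-for-rpc — the framework pauses before subsystem init, the test switches to the dynamic scheduler (the dpdk governor fails to initialize on this host and the scheduler continues with its load/core/busy limits of 20/80/95, as logged), and only then is framework_start_init issued so the reactors come up under the new scheduler. Against a target started the same way, the two RPCs are:

    # issued while the target is still paused by --wait-for-rpc, as in the trace above
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init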
00:04:10.653 10:20:59 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.653 10:20:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:10.653 10:20:59 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.653 10:20:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.653 10:20:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.653 ************************************ 00:04:10.653 START TEST scheduler_create_thread 00:04:10.653 ************************************ 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.653 2 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.653 3 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.653 4 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.653 5 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.653 6 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.653 7 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.653 8 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.653 9 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.653 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.654 10 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.654 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.218 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.218 00:04:11.218 real 0m0.591s 00:04:11.218 user 0m0.009s 00:04:11.218 sys 0m0.004s 00:04:11.218 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.218 10:20:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.218 ************************************ 00:04:11.218 END TEST scheduler_create_thread 00:04:11.218 ************************************ 00:04:11.218 10:20:59 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:11.218 10:20:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:11.218 10:20:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1082066 00:04:11.218 10:20:59 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1082066 ']' 00:04:11.218 10:20:59 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1082066 00:04:11.218 10:20:59 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:11.218 10:20:59 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:11.218 10:20:59 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1082066 00:04:11.218 10:20:59 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:11.218 10:20:59 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:11.218 10:20:59 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1082066' 00:04:11.218 killing process with pid 1082066 00:04:11.218 10:20:59 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1082066 00:04:11.218 10:20:59 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1082066 00:04:11.783 [2024-07-15 10:21:00.157537] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
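Note on the scheduler_create_thread subtest above: everything goes through an out-of-tree RPC plugin — scheduler_thread_create spawns pinned active/idle threads (masks 0x1 through 0x8, activity 100 or 0), scheduler_thread_set_active retunes thread 11 to 50, and a final "deleted" thread (id 12) is created and then removed with scheduler_thread_delete. Outside the harness the same calls go through rpc.py's --plugin hook; the PYTHONPATH export below is an assumption about where the plugin module lives:

    # make the scheduler test plugin importable (path is an assumption)
    export PYTHONPATH=./test/event/scheduler:$PYTHONPATH
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12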
00:04:12.041 00:04:12.041 real 0m1.748s 00:04:12.041 user 0m2.127s 00:04:12.041 sys 0m0.312s 00:04:12.041 10:21:00 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.041 10:21:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:12.041 ************************************ 00:04:12.041 END TEST event_scheduler 00:04:12.041 ************************************ 00:04:12.041 10:21:00 event -- common/autotest_common.sh@1142 -- # return 0 00:04:12.041 10:21:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:12.041 10:21:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:12.041 10:21:00 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.041 10:21:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.041 10:21:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:12.041 ************************************ 00:04:12.041 START TEST app_repeat 00:04:12.041 ************************************ 00:04:12.041 10:21:00 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:12.041 10:21:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.041 10:21:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.041 10:21:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:12.041 10:21:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.041 10:21:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:12.041 10:21:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:12.041 10:21:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:12.041 10:21:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1082399 00:04:12.041 10:21:00 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:12.041 10:21:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.041 10:21:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1082399' 00:04:12.041 Process app_repeat pid: 1082399 00:04:12.041 10:21:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:12.041 10:21:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:12.041 spdk_app_start Round 0 00:04:12.041 10:21:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1082399 /var/tmp/spdk-nbd.sock 00:04:12.041 10:21:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1082399 ']' 00:04:12.041 10:21:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:12.041 10:21:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:12.041 10:21:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:12.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:12.041 10:21:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:12.041 10:21:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:12.041 [2024-07-15 10:21:00.485011] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
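Note on the app_repeat pass starting here: in the lines that follow it creates two malloc bdevs over the dedicated /var/tmp/spdk-nbd.sock RPC socket (size 64, block size 4096, going by the arguments in the trace) and exports them as /dev/nbd0 and /dev/nbd1 before the data-verify pass runs against the kernel block devices. The bdev/NBD half of that setup, taken from the commands in the trace (the nbd module must be loaded, which the harness probes with modprobe above):

    # create the two malloc bdevs used by the test (args: size in MB, block size)
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
    # expose them through the kernel NBD driver
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1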
00:04:12.041 [2024-07-15 10:21:00.485068] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082399 ] 00:04:12.041 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.041 [2024-07-15 10:21:00.543231] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:12.299 [2024-07-15 10:21:00.649464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.299 [2024-07-15 10:21:00.649468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.299 10:21:00 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:12.299 10:21:00 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:12.299 10:21:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.556 Malloc0 00:04:12.556 10:21:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.814 Malloc1 00:04:12.815 10:21:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.815 10:21:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:13.072 /dev/nbd0 00:04:13.072 10:21:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:13.072 10:21:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:13.072 10:21:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:13.072 10:21:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:13.072 10:21:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:13.072 10:21:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:13.072 10:21:01 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:13.072 10:21:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:13.072 10:21:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:13.072 10:21:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:13.072 10:21:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:13.072 1+0 records in 00:04:13.072 1+0 records out 00:04:13.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001597 s, 25.6 MB/s 00:04:13.072 10:21:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.072 10:21:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:13.072 10:21:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.072 10:21:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:13.072 10:21:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:13.072 10:21:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:13.072 10:21:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.072 10:21:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:13.329 /dev/nbd1 00:04:13.329 10:21:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:13.329 10:21:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:13.329 10:21:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:13.329 10:21:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:13.329 10:21:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:13.329 10:21:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:13.329 10:21:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:13.329 10:21:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:13.329 10:21:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:13.329 10:21:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:13.329 10:21:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:13.329 1+0 records in 00:04:13.329 1+0 records out 00:04:13.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223087 s, 18.4 MB/s 00:04:13.329 10:21:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.587 10:21:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:13.587 10:21:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.587 10:21:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:13.587 10:21:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:13.587 10:21:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:13.587 10:21:01 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.587 10:21:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.587 10:21:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.587 10:21:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.587 10:21:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:13.587 { 00:04:13.587 "nbd_device": "/dev/nbd0", 00:04:13.587 "bdev_name": "Malloc0" 00:04:13.587 }, 00:04:13.587 { 00:04:13.587 "nbd_device": "/dev/nbd1", 00:04:13.587 "bdev_name": "Malloc1" 00:04:13.587 } 00:04:13.587 ]' 00:04:13.587 10:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:13.587 { 00:04:13.587 "nbd_device": "/dev/nbd0", 00:04:13.587 "bdev_name": "Malloc0" 00:04:13.587 }, 00:04:13.587 { 00:04:13.587 "nbd_device": "/dev/nbd1", 00:04:13.587 "bdev_name": "Malloc1" 00:04:13.587 } 00:04:13.587 ]' 00:04:13.587 10:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:13.844 /dev/nbd1' 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:13.844 /dev/nbd1' 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:13.844 256+0 records in 00:04:13.844 256+0 records out 00:04:13.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495553 s, 212 MB/s 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:13.844 256+0 records in 00:04:13.844 256+0 records out 00:04:13.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021033 s, 49.9 MB/s 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:13.844 256+0 records in 00:04:13.844 256+0 records out 00:04:13.844 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0231642 s, 45.3 MB/s 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.844 10:21:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:14.101 10:21:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:14.101 10:21:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:14.101 10:21:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:14.101 10:21:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:14.101 10:21:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:14.101 10:21:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:14.101 10:21:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:14.101 10:21:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:14.101 10:21:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:14.101 10:21:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:14.358 10:21:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:14.358 10:21:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:14.358 10:21:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:14.358 10:21:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:14.358 10:21:02 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:14.358 10:21:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:14.358 10:21:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:14.358 10:21:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:14.358 10:21:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:14.358 10:21:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.358 10:21:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:14.616 10:21:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:14.616 10:21:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:14.616 10:21:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:14.616 10:21:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:14.616 10:21:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:14.616 10:21:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:14.616 10:21:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:14.616 10:21:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:14.616 10:21:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:14.616 10:21:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:14.616 10:21:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:14.616 10:21:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:14.616 10:21:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:14.874 10:21:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:15.131 [2024-07-15 10:21:03.600853] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:15.389 [2024-07-15 10:21:03.712515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.389 [2024-07-15 10:21:03.712515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.389 [2024-07-15 10:21:03.769544] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:15.389 [2024-07-15 10:21:03.769606] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:17.915 10:21:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:17.915 10:21:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:17.915 spdk_app_start Round 1 00:04:17.915 10:21:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1082399 /var/tmp/spdk-nbd.sock 00:04:17.915 10:21:06 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1082399 ']' 00:04:17.915 10:21:06 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:17.915 10:21:06 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:17.915 10:21:06 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:17.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
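Each of the three app_repeat rounds repeats the data-path check just traced for Round 0: export two fresh malloc bdevs over NBD, write the same 1 MiB of random data to both devices, and compare it back byte for byte. Condensed to the essential commands (rpc.py path shortened, working directory assumed to be the spdk repo, file names as in the trace):

  rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096                      # -> Malloc0
  $rpc bdev_malloc_create 64 4096                      # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256  # 1 MiB of reference data
  for d in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct
      cmp -b -n 1M nbdrandtest $d                      # any mismatch fails the round
  done
  rm nbdrandtest
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc spdk_kill_instance SIGTERM                      # end of the round; the app restarts for the next one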
00:04:17.915 10:21:06 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:17.915 10:21:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:18.173 10:21:06 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.173 10:21:06 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:18.173 10:21:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.431 Malloc0 00:04:18.431 10:21:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.688 Malloc1 00:04:18.688 10:21:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.688 10:21:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:18.945 /dev/nbd0 00:04:18.945 10:21:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:18.945 10:21:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:18.945 10:21:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:18.945 10:21:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:18.945 10:21:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:18.945 10:21:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:18.945 10:21:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:18.945 10:21:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:18.945 10:21:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:18.945 10:21:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:18.945 10:21:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:18.945 1+0 records in 00:04:18.945 1+0 records out 00:04:18.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204955 s, 20.0 MB/s 00:04:18.945 10:21:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.945 10:21:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:18.945 10:21:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.945 10:21:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:18.945 10:21:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:18.945 10:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.945 10:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.945 10:21:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:19.203 /dev/nbd1 00:04:19.203 10:21:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:19.203 10:21:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:19.203 10:21:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:19.203 10:21:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:19.203 10:21:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:19.203 10:21:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:19.203 10:21:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:19.203 10:21:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:19.203 10:21:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:19.203 10:21:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:19.203 10:21:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.203 1+0 records in 00:04:19.203 1+0 records out 00:04:19.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198401 s, 20.6 MB/s 00:04:19.203 10:21:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.203 10:21:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:19.203 10:21:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.203 10:21:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:19.203 10:21:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:19.203 10:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.203 10:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.203 10:21:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.203 10:21:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.203 10:21:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.461 10:21:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:19.461 { 00:04:19.461 "nbd_device": "/dev/nbd0", 00:04:19.461 "bdev_name": "Malloc0" 00:04:19.461 }, 00:04:19.461 { 00:04:19.461 "nbd_device": "/dev/nbd1", 00:04:19.461 "bdev_name": "Malloc1" 00:04:19.461 } 00:04:19.461 ]' 00:04:19.461 10:21:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:19.461 { 00:04:19.461 "nbd_device": "/dev/nbd0", 00:04:19.461 "bdev_name": "Malloc0" 00:04:19.461 }, 00:04:19.461 { 00:04:19.461 "nbd_device": "/dev/nbd1", 00:04:19.461 "bdev_name": "Malloc1" 00:04:19.461 } 00:04:19.461 ]' 00:04:19.461 10:21:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:19.719 /dev/nbd1' 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:19.719 /dev/nbd1' 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:19.719 256+0 records in 00:04:19.719 256+0 records out 00:04:19.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532404 s, 197 MB/s 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:19.719 256+0 records in 00:04:19.719 256+0 records out 00:04:19.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021367 s, 49.1 MB/s 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:19.719 256+0 records in 00:04:19.719 256+0 records out 00:04:19.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227873 s, 46.0 MB/s 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:19.719 10:21:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:19.720 10:21:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.720 10:21:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:19.720 10:21:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.720 10:21:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:19.720 10:21:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.720 10:21:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:19.720 10:21:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.720 10:21:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.720 10:21:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:19.720 10:21:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:19.720 10:21:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.720 10:21:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:19.984 10:21:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:19.984 10:21:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:19.984 10:21:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:19.984 10:21:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.984 10:21:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.984 10:21:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:19.984 10:21:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:19.984 10:21:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.984 10:21:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.984 10:21:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:20.282 10:21:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:20.282 10:21:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:20.282 10:21:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:20.282 10:21:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.282 10:21:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.282 10:21:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:20.282 10:21:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.282 10:21:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.282 10:21:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.282 10:21:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.282 10:21:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.539 10:21:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:20.539 10:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:20.539 10:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.539 10:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:20.539 10:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:20.539 10:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.539 10:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:20.539 10:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:20.539 10:21:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:20.539 10:21:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:20.539 10:21:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:20.539 10:21:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:20.539 10:21:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:20.796 10:21:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:21.055 [2024-07-15 10:21:09.468269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:21.055 [2024-07-15 10:21:09.568552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.055 [2024-07-15 10:21:09.568556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.313 [2024-07-15 10:21:09.626765] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:21.313 [2024-07-15 10:21:09.626877] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:23.838 10:21:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:23.838 10:21:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:23.838 spdk_app_start Round 2 00:04:23.838 10:21:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1082399 /var/tmp/spdk-nbd.sock 00:04:23.838 10:21:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1082399 ']' 00:04:23.838 10:21:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:23.838 10:21:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:23.838 10:21:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:23.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
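The waitfornbd checks that keep reappearing in the trace (a grep against /proc/partitions followed by a single direct 4 KiB read) and their waitfornbd_exit counterpart reduce to the following. This is a trimmed reconstruction of what the trace shows, with the temp-file path shortened; the sleep between retries is an assumption, since every attempt in this run succeeded on the first pass:

  waitfornbd() {                      # wait for the kernel to expose the device, then prove it is readable
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                   # assumption: retry delay not visible in the trace
      done
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]                # a zero-byte read means the device is not really usable
  }

  waitfornbd_exit() {                 # wait for the device to disappear again after nbd_stop_disk
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || break
          sleep 0.1                   # assumption, as above
      done
  }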
00:04:23.838 10:21:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:23.838 10:21:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:24.095 10:21:12 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:24.095 10:21:12 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:24.095 10:21:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.353 Malloc0 00:04:24.353 10:21:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.612 Malloc1 00:04:24.612 10:21:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.612 10:21:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.612 10:21:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.612 10:21:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:24.612 10:21:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.612 10:21:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:24.612 10:21:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.612 10:21:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.612 10:21:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.612 10:21:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:24.612 10:21:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.612 10:21:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:24.612 10:21:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:24.612 10:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:24.612 10:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.612 10:21:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:24.869 /dev/nbd0 00:04:24.869 10:21:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:24.869 10:21:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:24.869 10:21:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:24.869 10:21:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:24.869 10:21:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:24.869 10:21:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:24.869 10:21:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:24.869 10:21:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:24.869 10:21:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:24.869 10:21:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:24.869 10:21:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:24.869 1+0 records in 00:04:24.869 1+0 records out 00:04:24.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176541 s, 23.2 MB/s 00:04:24.869 10:21:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.869 10:21:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:24.869 10:21:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.869 10:21:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:24.869 10:21:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:24.869 10:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.869 10:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.869 10:21:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:25.127 /dev/nbd1 00:04:25.127 10:21:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:25.127 10:21:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:25.127 10:21:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:25.127 10:21:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:25.127 10:21:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:25.127 10:21:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:25.127 10:21:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:25.127 10:21:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:25.127 10:21:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:25.127 10:21:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:25.127 10:21:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.127 1+0 records in 00:04:25.127 1+0 records out 00:04:25.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182888 s, 22.4 MB/s 00:04:25.127 10:21:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.127 10:21:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:25.127 10:21:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.127 10:21:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:25.127 10:21:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:25.127 10:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.127 10:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.127 10:21:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.127 10:21:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.127 10:21:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:25.385 { 00:04:25.385 "nbd_device": "/dev/nbd0", 00:04:25.385 "bdev_name": "Malloc0" 00:04:25.385 }, 00:04:25.385 { 00:04:25.385 "nbd_device": "/dev/nbd1", 00:04:25.385 "bdev_name": "Malloc1" 00:04:25.385 } 00:04:25.385 ]' 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.385 { 00:04:25.385 "nbd_device": "/dev/nbd0", 00:04:25.385 "bdev_name": "Malloc0" 00:04:25.385 }, 00:04:25.385 { 00:04:25.385 "nbd_device": "/dev/nbd1", 00:04:25.385 "bdev_name": "Malloc1" 00:04:25.385 } 00:04:25.385 ]' 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:25.385 /dev/nbd1' 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:25.385 /dev/nbd1' 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:25.385 256+0 records in 00:04:25.385 256+0 records out 00:04:25.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00518244 s, 202 MB/s 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:25.385 256+0 records in 00:04:25.385 256+0 records out 00:04:25.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021404 s, 49.0 MB/s 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:25.385 256+0 records in 00:04:25.385 256+0 records out 00:04:25.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230245 s, 45.5 MB/s 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.385 10:21:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:25.643 10:21:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:25.643 10:21:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:25.643 10:21:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:25.643 10:21:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.643 10:21:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.643 10:21:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:25.643 10:21:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.643 10:21:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.643 10:21:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.643 10:21:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:25.901 10:21:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:25.901 10:21:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:25.901 10:21:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:25.901 10:21:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.901 10:21:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.901 10:21:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:25.901 10:21:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.901 10:21:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.901 10:21:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.901 10:21:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.901 10:21:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.159 10:21:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:26.159 10:21:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:26.160 10:21:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.417 10:21:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:26.417 10:21:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:26.417 10:21:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.417 10:21:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:26.417 10:21:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:26.417 10:21:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:26.417 10:21:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:26.417 10:21:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:26.417 10:21:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:26.417 10:21:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.675 10:21:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:26.932 [2024-07-15 10:21:15.261394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.932 [2024-07-15 10:21:15.362050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.932 [2024-07-15 10:21:15.362050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.932 [2024-07-15 10:21:15.411676] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:26.932 [2024-07-15 10:21:15.411741] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:30.208 10:21:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1082399 /var/tmp/spdk-nbd.sock 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1082399 ']' 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:30.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:30.208 10:21:18 event.app_repeat -- event/event.sh@39 -- # killprocess 1082399 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1082399 ']' 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1082399 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1082399 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1082399' 00:04:30.208 killing process with pid 1082399 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1082399 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1082399 00:04:30.208 spdk_app_start is called in Round 0. 00:04:30.208 Shutdown signal received, stop current app iteration 00:04:30.208 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:04:30.208 spdk_app_start is called in Round 1. 00:04:30.208 Shutdown signal received, stop current app iteration 00:04:30.208 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:04:30.208 spdk_app_start is called in Round 2. 00:04:30.208 Shutdown signal received, stop current app iteration 00:04:30.208 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:04:30.208 spdk_app_start is called in Round 3. 
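killprocess, traced here for the app_repeat pid 1082399, only signals a process it has positively identified, and in particular never signals a sudo wrapper by mistake. A condensed sketch of the checks the trace walks through (only the Linux branch shown; the guards that this run never takes are marked as assumptions):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                     # assumption: bail out on an empty pid
      kill -0 "$pid" || return 1                    # still alive?
      if [ "$(uname)" = Linux ]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK app
          if [ "$name" = sudo ]; then
              return 1                              # assumption: the traced run never takes this branch
          fi
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                   # reap it so its exit status is propagated
  }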
00:04:30.208 Shutdown signal received, stop current app iteration 00:04:30.208 10:21:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:30.208 10:21:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:30.208 00:04:30.208 real 0m18.060s 00:04:30.208 user 0m39.285s 00:04:30.208 sys 0m3.203s 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.208 10:21:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:30.208 ************************************ 00:04:30.208 END TEST app_repeat 00:04:30.208 ************************************ 00:04:30.208 10:21:18 event -- common/autotest_common.sh@1142 -- # return 0 00:04:30.208 10:21:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:30.208 10:21:18 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:30.208 10:21:18 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.208 10:21:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.208 10:21:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.208 ************************************ 00:04:30.208 START TEST cpu_locks 00:04:30.208 ************************************ 00:04:30.208 10:21:18 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:30.208 * Looking for test storage... 00:04:30.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:30.208 10:21:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:30.208 10:21:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:30.208 10:21:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:30.209 10:21:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:30.209 10:21:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.209 10:21:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.209 10:21:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:30.209 ************************************ 00:04:30.209 START TEST default_locks 00:04:30.209 ************************************ 00:04:30.209 10:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:30.209 10:21:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1085344 00:04:30.209 10:21:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.209 10:21:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1085344 00:04:30.209 10:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1085344 ']' 00:04:30.209 10:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.209 10:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.209 10:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
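The default_locks test that starts here boils down to: launch a single-core spdk_tgt, wait for its RPC socket, confirm with lslocks that the process is holding its CPU core lock, then kill it. The check, condensed from the trace (workspace prefix dropped):

  ./build/bin/spdk_tgt -m 0x1 &
  tgt_pid=$!
  waitforlisten "$tgt_pid"                        # defaults to /var/tmp/spdk.sock
  lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock   # non-zero exit means the core lock is missing
  killprocess "$tgt_pid"

The stray "lslocks: write error" below is expected rather than a failure: grep -q exits as soon as it sees a match and closes the pipe, so lslocks reports the broken pipe.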
00:04:30.209 10:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.209 10:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:30.209 [2024-07-15 10:21:18.708019] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:30.209 [2024-07-15 10:21:18.708107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085344 ] 00:04:30.209 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.466 [2024-07-15 10:21:18.764985] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.466 [2024-07-15 10:21:18.868172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.724 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:30.724 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:30.724 10:21:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1085344 00:04:30.724 10:21:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1085344 00:04:30.724 10:21:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:30.981 lslocks: write error 00:04:30.981 10:21:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1085344 00:04:30.981 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1085344 ']' 00:04:30.981 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1085344 00:04:30.981 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:30.981 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:30.981 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1085344 00:04:30.981 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:30.981 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:30.981 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1085344' 00:04:30.981 killing process with pid 1085344 00:04:30.981 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1085344 00:04:30.981 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1085344 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1085344 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1085344 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1085344 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1085344 ']' 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1085344) - No such process 00:04:31.547 ERROR: process (pid: 1085344) is no longer running 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:31.547 00:04:31.547 real 0m1.170s 00:04:31.547 user 0m1.108s 00:04:31.547 sys 0m0.490s 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.547 10:21:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.547 ************************************ 00:04:31.547 END TEST default_locks 00:04:31.547 ************************************ 00:04:31.547 10:21:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:31.547 10:21:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:31.547 10:21:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.547 10:21:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.547 10:21:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.547 ************************************ 00:04:31.547 START TEST default_locks_via_rpc 00:04:31.547 ************************************ 00:04:31.547 10:21:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:04:31.547 10:21:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1085506 00:04:31.547 10:21:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.547 10:21:19 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1085506 00:04:31.547 10:21:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1085506 ']' 00:04:31.547 10:21:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.547 10:21:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.547 10:21:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.547 10:21:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.547 10:21:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.547 [2024-07-15 10:21:19.929714] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:31.547 [2024-07-15 10:21:19.929816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085506 ] 00:04:31.547 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.547 [2024-07-15 10:21:19.987037] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.805 [2024-07-15 10:21:20.106710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.805 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.805 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:31.805 10:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:31.805 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.805 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.805 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.805 10:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:32.062 10:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:32.062 10:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:32.062 10:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:32.062 10:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:32.062 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.062 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.062 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.062 10:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1085506 00:04:32.062 10:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1085506 00:04:32.062 10:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:32.062 
10:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1085506 00:04:32.062 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1085506 ']' 00:04:32.062 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1085506 00:04:32.062 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:04:32.062 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.319 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1085506 00:04:32.319 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.319 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.319 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1085506' 00:04:32.319 killing process with pid 1085506 00:04:32.319 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1085506 00:04:32.319 10:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1085506 00:04:32.576 00:04:32.576 real 0m1.198s 00:04:32.576 user 0m1.169s 00:04:32.576 sys 0m0.488s 00:04:32.576 10:21:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.576 10:21:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.576 ************************************ 00:04:32.576 END TEST default_locks_via_rpc 00:04:32.576 ************************************ 00:04:32.576 10:21:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:32.577 10:21:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:32.577 10:21:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.577 10:21:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.577 10:21:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:32.835 ************************************ 00:04:32.835 START TEST non_locking_app_on_locked_coremask 00:04:32.835 ************************************ 00:04:32.835 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:04:32.835 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1085675 00:04:32.835 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.835 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1085675 /var/tmp/spdk.sock 00:04:32.835 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1085675 ']' 00:04:32.835 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.835 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.835 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.835 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.835 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:32.835 [2024-07-15 10:21:21.180389] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:32.835 [2024-07-15 10:21:21.180483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085675 ] 00:04:32.835 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.835 [2024-07-15 10:21:21.235685] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.835 [2024-07-15 10:21:21.334293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.092 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.092 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:33.092 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1085682 00:04:33.092 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:33.092 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1085682 /var/tmp/spdk2.sock 00:04:33.092 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1085682 ']' 00:04:33.092 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:33.092 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.092 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:33.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:33.092 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.092 10:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:33.092 [2024-07-15 10:21:21.622176] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:33.092 [2024-07-15 10:21:21.622280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085682 ] 00:04:33.350 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.350 [2024-07-15 10:21:21.715768] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:33.350 [2024-07-15 10:21:21.715827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.607 [2024-07-15 10:21:21.934094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.172 10:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.172 10:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:34.172 10:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1085675 00:04:34.172 10:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1085675 00:04:34.172 10:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:34.735 lslocks: write error 00:04:34.735 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1085675 00:04:34.735 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1085675 ']' 00:04:34.735 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1085675 00:04:34.735 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:34.735 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:34.735 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1085675 00:04:34.735 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:34.735 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:34.735 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1085675' 00:04:34.735 killing process with pid 1085675 00:04:34.735 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1085675 00:04:34.735 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1085675 00:04:35.664 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1085682 00:04:35.664 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1085682 ']' 00:04:35.664 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1085682 00:04:35.664 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:35.664 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:35.664 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1085682 00:04:35.664 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:35.664 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:35.664 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1085682' 00:04:35.664 
killing process with pid 1085682 00:04:35.664 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1085682 00:04:35.664 10:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1085682 00:04:35.921 00:04:35.921 real 0m3.195s 00:04:35.921 user 0m3.365s 00:04:35.921 sys 0m1.001s 00:04:35.921 10:21:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.921 10:21:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.921 ************************************ 00:04:35.921 END TEST non_locking_app_on_locked_coremask 00:04:35.921 ************************************ 00:04:35.921 10:21:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:35.921 10:21:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:35.921 10:21:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.921 10:21:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.921 10:21:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.921 ************************************ 00:04:35.921 START TEST locking_app_on_unlocked_coremask 00:04:35.921 ************************************ 00:04:35.921 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:04:35.921 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1086108 00:04:35.921 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:35.921 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1086108 /var/tmp/spdk.sock 00:04:35.921 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1086108 ']' 00:04:35.921 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.921 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.921 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.921 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.921 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.921 [2024-07-15 10:21:24.418539] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:35.921 [2024-07-15 10:21:24.418618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086108 ] 00:04:35.921 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.177 [2024-07-15 10:21:24.476176] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:36.177 [2024-07-15 10:21:24.476206] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.177 [2024-07-15 10:21:24.576086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.433 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:36.433 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:36.433 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1086112 00:04:36.433 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:36.433 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1086112 /var/tmp/spdk2.sock 00:04:36.433 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1086112 ']' 00:04:36.433 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:36.433 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.433 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:36.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:36.433 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.433 10:21:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.433 [2024-07-15 10:21:24.870133] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:36.433 [2024-07-15 10:21:24.870221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086112 ] 00:04:36.433 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.433 [2024-07-15 10:21:24.952420] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.690 [2024-07-15 10:21:25.166749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.619 10:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.619 10:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:37.619 10:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1086112 00:04:37.619 10:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1086112 00:04:37.619 10:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.876 lslocks: write error 00:04:37.876 10:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1086108 00:04:37.876 10:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1086108 ']' 00:04:37.876 10:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1086108 00:04:37.876 10:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:37.876 10:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.876 10:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1086108 00:04:37.876 10:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:37.876 10:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:37.876 10:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1086108' 00:04:37.876 killing process with pid 1086108 00:04:37.876 10:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1086108 00:04:37.876 10:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1086108 00:04:38.810 10:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1086112 00:04:38.810 10:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1086112 ']' 00:04:38.810 10:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1086112 00:04:38.810 10:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:38.810 10:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:38.810 10:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1086112 00:04:38.810 10:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:04:38.810 10:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:38.810 10:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1086112' 00:04:38.810 killing process with pid 1086112 00:04:38.810 10:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1086112 00:04:38.810 10:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1086112 00:04:39.068 00:04:39.068 real 0m3.232s 00:04:39.068 user 0m3.417s 00:04:39.068 sys 0m1.020s 00:04:39.069 10:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.069 10:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.069 ************************************ 00:04:39.069 END TEST locking_app_on_unlocked_coremask 00:04:39.069 ************************************ 00:04:39.326 10:21:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:39.326 10:21:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:39.326 10:21:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.327 10:21:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.327 10:21:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.327 ************************************ 00:04:39.327 START TEST locking_app_on_locked_coremask 00:04:39.327 ************************************ 00:04:39.327 10:21:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:04:39.327 10:21:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1086538 00:04:39.327 10:21:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.327 10:21:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1086538 /var/tmp/spdk.sock 00:04:39.327 10:21:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1086538 ']' 00:04:39.327 10:21:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.327 10:21:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:39.327 10:21:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.327 10:21:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:39.327 10:21:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.327 [2024-07-15 10:21:27.703511] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:39.327 [2024-07-15 10:21:27.703600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086538 ] 00:04:39.327 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.327 [2024-07-15 10:21:27.760513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.327 [2024-07-15 10:21:27.869350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.584 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.584 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:39.584 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1086552 00:04:39.584 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:39.584 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1086552 /var/tmp/spdk2.sock 00:04:39.584 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:39.584 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1086552 /var/tmp/spdk2.sock 00:04:39.584 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:39.584 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.584 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:39.585 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.585 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1086552 /var/tmp/spdk2.sock 00:04:39.585 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1086552 ']' 00:04:39.585 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:39.585 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:39.585 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:39.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:39.585 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:39.585 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.842 [2024-07-15 10:21:28.158422] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:39.842 [2024-07-15 10:21:28.158507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086552 ] 00:04:39.842 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.842 [2024-07-15 10:21:28.239377] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1086538 has claimed it. 00:04:39.842 [2024-07-15 10:21:28.239429] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:40.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1086552) - No such process 00:04:40.406 ERROR: process (pid: 1086552) is no longer running 00:04:40.406 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.406 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:40.406 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:40.406 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:40.406 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:40.406 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:40.406 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1086538 00:04:40.406 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1086538 00:04:40.406 10:21:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:40.662 lslocks: write error 00:04:40.662 10:21:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1086538 00:04:40.662 10:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1086538 ']' 00:04:40.662 10:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1086538 00:04:40.662 10:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:40.662 10:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.662 10:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1086538 00:04:40.662 10:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.662 10:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.662 10:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1086538' 00:04:40.662 killing process with pid 1086538 00:04:40.662 10:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1086538 00:04:40.662 10:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1086538 00:04:41.226 00:04:41.226 real 0m1.969s 00:04:41.226 user 0m2.138s 00:04:41.226 sys 0m0.612s 00:04:41.226 10:21:29 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.226 10:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.226 ************************************ 00:04:41.226 END TEST locking_app_on_locked_coremask 00:04:41.226 ************************************ 00:04:41.226 10:21:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:41.226 10:21:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:41.226 10:21:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.226 10:21:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.226 10:21:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.226 ************************************ 00:04:41.226 START TEST locking_overlapped_coremask 00:04:41.226 ************************************ 00:04:41.226 10:21:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:04:41.226 10:21:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1086833 00:04:41.226 10:21:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:41.226 10:21:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1086833 /var/tmp/spdk.sock 00:04:41.226 10:21:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1086833 ']' 00:04:41.226 10:21:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.226 10:21:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.226 10:21:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.226 10:21:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.226 10:21:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.226 [2024-07-15 10:21:29.724063] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:41.226 [2024-07-15 10:21:29.724182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086833 ] 00:04:41.226 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.481 [2024-07-15 10:21:29.782105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:41.481 [2024-07-15 10:21:29.894457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.482 [2024-07-15 10:21:29.894522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.482 [2024-07-15 10:21:29.894525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1086852 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1086852 /var/tmp/spdk2.sock 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1086852 /var/tmp/spdk2.sock 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1086852 /var/tmp/spdk2.sock 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1086852 ']' 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.739 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.739 [2024-07-15 10:21:30.187401] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:41.739 [2024-07-15 10:21:30.187485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086852 ] 00:04:41.739 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.739 [2024-07-15 10:21:30.273530] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1086833 has claimed it. 00:04:41.739 [2024-07-15 10:21:30.273580] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:42.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1086852) - No such process 00:04:42.672 ERROR: process (pid: 1086852) is no longer running 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1086833 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1086833 ']' 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1086833 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1086833 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1086833' 00:04:42.672 killing process with pid 1086833 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1086833 00:04:42.672 10:21:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1086833 00:04:42.929 00:04:42.929 real 0m1.653s 00:04:42.929 user 0m4.380s 00:04:42.929 sys 0m0.444s 00:04:42.929 10:21:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.929 10:21:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.929 ************************************ 00:04:42.929 END TEST locking_overlapped_coremask 00:04:42.929 ************************************ 00:04:42.929 10:21:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:42.929 10:21:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:42.929 10:21:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.929 10:21:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.929 10:21:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.929 ************************************ 00:04:42.929 START TEST locking_overlapped_coremask_via_rpc 00:04:42.929 ************************************ 00:04:42.929 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:04:42.929 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1087014 00:04:42.929 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:42.929 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1087014 /var/tmp/spdk.sock 00:04:42.929 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1087014 ']' 00:04:42.929 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.929 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.929 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.930 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.930 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.930 [2024-07-15 10:21:31.426874] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:42.930 [2024-07-15 10:21:31.426964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087014 ] 00:04:42.930 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.187 [2024-07-15 10:21:31.483370] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:43.187 [2024-07-15 10:21:31.483399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:43.187 [2024-07-15 10:21:31.583291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.187 [2024-07-15 10:21:31.583353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.187 [2024-07-15 10:21:31.583357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.446 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.446 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:43.446 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1087134 00:04:43.446 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1087134 /var/tmp/spdk2.sock 00:04:43.446 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:43.446 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1087134 ']' 00:04:43.446 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.446 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.446 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.446 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.446 10:21:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.446 [2024-07-15 10:21:31.880889] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:43.446 [2024-07-15 10:21:31.880978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087134 ] 00:04:43.446 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.446 [2024-07-15 10:21:31.970312] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:43.446 [2024-07-15 10:21:31.970352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:43.704 [2024-07-15 10:21:32.192977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.704 [2024-07-15 10:21:32.193039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:04:43.704 [2024-07-15 10:21:32.193042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.270 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.270 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:44.270 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:44.270 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.270 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.270 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.270 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.270 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:44.270 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.270 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:44.270 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.270 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.527 [2024-07-15 10:21:32.822922] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1087014 has claimed it. 
00:04:44.527 request: 00:04:44.527 { 00:04:44.527 "method": "framework_enable_cpumask_locks", 00:04:44.527 "req_id": 1 00:04:44.527 } 00:04:44.527 Got JSON-RPC error response 00:04:44.527 response: 00:04:44.527 { 00:04:44.527 "code": -32603, 00:04:44.527 "message": "Failed to claim CPU core: 2" 00:04:44.527 } 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1087014 /var/tmp/spdk.sock 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1087014 ']' 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.527 10:21:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1087134 /var/tmp/spdk2.sock 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1087134 ']' 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
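The -32603 response above is the outcome this test is designed to provoke: the first spdk_tgt (pid 1087014, reactors on cores 0-2 as logged earlier) has already claimed the per-core lock files after its framework_enable_cpumask_locks call, while the second target (pid 1087134) was started with -m 0x1c (cores 2-4) and --disable-cpumask-locks, so it comes up despite the overlap on core 2. When framework_enable_cpumask_locks is then sent to the second target, claiming core 2 fails. A minimal sketch of the same sequence outside the harness, assuming a built SPDK tree and that scripts/rpc.py exposes the framework_enable_cpumask_locks method that rpc_cmd wraps here (first target's 0x7 mask is assumed from the reactor messages; wait for each target to come up before issuing RPCs):
  build/bin/spdk_tgt -m 0x7 -r /var/tmp/spdk.sock --disable-cpumask-locks &      # first target on cores 0-2
  scripts/rpc.py framework_enable_cpumask_locks                                  # first target claims lock files for cores 0-2
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &    # overlaps on core 2 but skips claiming it
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks           # fails: core 2 is already claimed, as above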
00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:44.784 00:04:44.784 real 0m1.947s 00:04:44.784 user 0m0.998s 00:04:44.784 sys 0m0.183s 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.784 10:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.784 ************************************ 00:04:44.784 END TEST locking_overlapped_coremask_via_rpc 00:04:44.784 ************************************ 00:04:45.071 10:21:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:45.071 10:21:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:45.071 10:21:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1087014 ]] 00:04:45.071 10:21:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1087014 00:04:45.071 10:21:33 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1087014 ']' 00:04:45.071 10:21:33 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1087014 00:04:45.071 10:21:33 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:04:45.071 10:21:33 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.071 10:21:33 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1087014 00:04:45.071 10:21:33 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.071 10:21:33 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.071 10:21:33 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1087014' 00:04:45.071 killing process with pid 1087014 00:04:45.071 10:21:33 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1087014 00:04:45.071 10:21:33 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1087014 00:04:45.374 10:21:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1087134 ]] 00:04:45.374 10:21:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1087134 00:04:45.374 10:21:33 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1087134 ']' 00:04:45.374 10:21:33 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1087134 00:04:45.374 10:21:33 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:04:45.374 10:21:33 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.374 10:21:33 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1087134 00:04:45.374 10:21:33 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:45.374 10:21:33 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:45.374 10:21:33 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1087134' 00:04:45.374 killing process with pid 1087134 00:04:45.374 10:21:33 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1087134 00:04:45.374 10:21:33 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1087134 00:04:45.939 10:21:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:45.939 10:21:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:45.939 10:21:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1087014 ]] 00:04:45.939 10:21:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1087014 00:04:45.939 10:21:34 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1087014 ']' 00:04:45.939 10:21:34 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1087014 00:04:45.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1087014) - No such process 00:04:45.939 10:21:34 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1087014 is not found' 00:04:45.939 Process with pid 1087014 is not found 00:04:45.939 10:21:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1087134 ]] 00:04:45.939 10:21:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1087134 00:04:45.939 10:21:34 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1087134 ']' 00:04:45.939 10:21:34 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1087134 00:04:45.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1087134) - No such process 00:04:45.939 10:21:34 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1087134 is not found' 00:04:45.939 Process with pid 1087134 is not found 00:04:45.939 10:21:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:45.939 00:04:45.939 real 0m15.698s 00:04:45.939 user 0m27.307s 00:04:45.939 sys 0m5.124s 00:04:45.939 10:21:34 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.939 10:21:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.939 ************************************ 00:04:45.939 END TEST cpu_locks 00:04:45.939 ************************************ 00:04:45.939 10:21:34 event -- common/autotest_common.sh@1142 -- # return 0 00:04:45.939 00:04:45.939 real 0m39.734s 00:04:45.939 user 1m15.498s 00:04:45.940 sys 0m9.094s 00:04:45.940 10:21:34 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.940 10:21:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.940 ************************************ 00:04:45.940 END TEST event 00:04:45.940 ************************************ 00:04:45.940 10:21:34 -- common/autotest_common.sh@1142 -- # return 0 00:04:45.940 10:21:34 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:45.940 10:21:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.940 10:21:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.940 
10:21:34 -- common/autotest_common.sh@10 -- # set +x 00:04:45.940 ************************************ 00:04:45.940 START TEST thread 00:04:45.940 ************************************ 00:04:45.940 10:21:34 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:45.940 * Looking for test storage... 00:04:45.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:45.940 10:21:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:45.940 10:21:34 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:45.940 10:21:34 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.940 10:21:34 thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.940 ************************************ 00:04:45.940 START TEST thread_poller_perf 00:04:45.940 ************************************ 00:04:45.940 10:21:34 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:45.940 [2024-07-15 10:21:34.436894] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:45.940 [2024-07-15 10:21:34.436961] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087514 ] 00:04:45.940 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.196 [2024-07-15 10:21:34.499855] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.196 [2024-07-15 10:21:34.611749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.196 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:47.565 ====================================== 00:04:47.565 busy:2710175457 (cyc) 00:04:47.565 total_run_count: 368000 00:04:47.565 tsc_hz: 2700000000 (cyc) 00:04:47.565 ====================================== 00:04:47.565 poller_cost: 7364 (cyc), 2727 (nsec) 00:04:47.565 00:04:47.565 real 0m1.301s 00:04:47.565 user 0m1.216s 00:04:47.565 sys 0m0.077s 00:04:47.565 10:21:35 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.565 10:21:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.565 ************************************ 00:04:47.565 END TEST thread_poller_perf 00:04:47.565 ************************************ 00:04:47.565 10:21:35 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:47.565 10:21:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:47.565 10:21:35 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:47.565 10:21:35 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.565 10:21:35 thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.565 ************************************ 00:04:47.565 START TEST thread_poller_perf 00:04:47.565 ************************************ 00:04:47.565 10:21:35 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:47.565 [2024-07-15 10:21:35.789181] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:47.565 [2024-07-15 10:21:35.789252] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087670 ] 00:04:47.565 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.565 [2024-07-15 10:21:35.850523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.565 [2024-07-15 10:21:35.952856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.565 Running 1000 pollers for 1 seconds with 0 microseconds period. 
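The poller_cost printed in each summary is just the ratio of the counters in that summary: busy TSC cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A quick sanity check of the first run's numbers with bc (not part of the test, purely arithmetic on the values logged above):
  echo '2710175457 / 368000' | bc                  # -> 7364 cycles per poll
  echo '7364 * 1000000000 / 2700000000' | bc       # -> 2727 nsec at the reported 2.7 GHz TSC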
00:04:48.934 ====================================== 00:04:48.934 busy:2702352354 (cyc) 00:04:48.934 total_run_count: 4855000 00:04:48.934 tsc_hz: 2700000000 (cyc) 00:04:48.934 ====================================== 00:04:48.934 poller_cost: 556 (cyc), 205 (nsec) 00:04:48.934 00:04:48.934 real 0m1.288s 00:04:48.934 user 0m1.204s 00:04:48.934 sys 0m0.079s 00:04:48.934 10:21:37 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.934 10:21:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:48.934 ************************************ 00:04:48.934 END TEST thread_poller_perf 00:04:48.934 ************************************ 00:04:48.934 10:21:37 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:48.934 10:21:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:48.934 00:04:48.934 real 0m2.747s 00:04:48.934 user 0m2.495s 00:04:48.934 sys 0m0.250s 00:04:48.934 10:21:37 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.934 10:21:37 thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.934 ************************************ 00:04:48.934 END TEST thread 00:04:48.934 ************************************ 00:04:48.934 10:21:37 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.934 10:21:37 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:48.934 10:21:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.934 10:21:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.934 10:21:37 -- common/autotest_common.sh@10 -- # set +x 00:04:48.934 ************************************ 00:04:48.934 START TEST accel 00:04:48.934 ************************************ 00:04:48.934 10:21:37 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:48.934 * Looking for test storage... 00:04:48.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:04:48.934 10:21:37 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:48.934 10:21:37 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:48.934 10:21:37 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.934 10:21:37 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1087869 00:04:48.934 10:21:37 accel -- accel/accel.sh@63 -- # waitforlisten 1087869 00:04:48.934 10:21:37 accel -- common/autotest_common.sh@829 -- # '[' -z 1087869 ']' 00:04:48.934 10:21:37 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:48.934 10:21:37 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.934 10:21:37 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:48.934 10:21:37 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.934 10:21:37 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:48.935 10:21:37 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:48.935 10:21:37 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:48.935 10:21:37 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.935 10:21:37 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:48.935 10:21:37 accel -- common/autotest_common.sh@10 -- # set +x 00:04:48.935 10:21:37 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:48.935 10:21:37 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:48.935 10:21:37 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:48.935 10:21:37 accel -- accel/accel.sh@41 -- # jq -r . 00:04:48.935 [2024-07-15 10:21:37.243605] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:48.935 [2024-07-15 10:21:37.243682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087869 ] 00:04:48.935 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.935 [2024-07-15 10:21:37.300962] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.935 [2024-07-15 10:21:37.412599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@862 -- # return 0 00:04:49.192 10:21:37 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:49.192 10:21:37 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:49.192 10:21:37 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:49.192 10:21:37 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:49.192 10:21:37 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:49.192 10:21:37 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.192 10:21:37 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@10 -- # set +x 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.192 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 
10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.192 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.192 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.192 10:21:37 accel -- accel/accel.sh@75 -- # killprocess 1087869 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@948 -- # '[' -z 1087869 ']' 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@952 -- # kill -0 1087869 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@953 -- # uname 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1087869 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1087869' 00:04:49.192 killing process with pid 1087869 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@967 -- # kill 1087869 00:04:49.192 10:21:37 accel -- common/autotest_common.sh@972 -- # wait 1087869 00:04:49.756 10:21:38 accel -- accel/accel.sh@76 -- # trap - ERR 00:04:49.756 10:21:38 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:49.756 10:21:38 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:04:49.756 10:21:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.756 10:21:38 accel -- common/autotest_common.sh@10 -- # set +x 00:04:49.756 10:21:38 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:04:49.756 10:21:38 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:49.756 10:21:38 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:04:49.756 10:21:38 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:49.756 10:21:38 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:49.756 10:21:38 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:49.756 10:21:38 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:49.756 10:21:38 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:49.756 10:21:38 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:04:49.756 10:21:38 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
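Every opcode in the dump above resolves to the software module, since no hardware accel modules were configured for this target. The same table can be read from any running target; a sketch, assuming scripts/rpc.py exposes the accel_get_opc_assignments method that rpc_cmd wraps here and that the target listens on the default /var/tmp/spdk.sock:
  scripts/rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'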
00:04:49.756 10:21:38 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.756 10:21:38 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:04:49.756 10:21:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:49.756 10:21:38 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:49.756 10:21:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:49.756 10:21:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.756 10:21:38 accel -- common/autotest_common.sh@10 -- # set +x 00:04:49.756 ************************************ 00:04:49.756 START TEST accel_missing_filename 00:04:49.756 ************************************ 00:04:49.756 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:04:49.756 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:04:49.757 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:49.757 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:49.757 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.757 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:49.757 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.757 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:04:49.757 10:21:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:49.757 10:21:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:04:49.757 10:21:38 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:49.757 10:21:38 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:49.757 10:21:38 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:49.757 10:21:38 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:49.757 10:21:38 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:49.757 10:21:38 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:04:49.757 10:21:38 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:04:49.757 [2024-07-15 10:21:38.264985] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:49.757 [2024-07-15 10:21:38.265049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088037 ] 00:04:49.757 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.014 [2024-07-15 10:21:38.322616] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.014 [2024-07-15 10:21:38.426932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.014 [2024-07-15 10:21:38.481256] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.014 [2024-07-15 10:21:38.555867] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:04:50.271 A filename is required. 
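"A filename is required." is the expected failure here: for compress/decompress workloads accel_perf takes its uncompressed input via -l (see the option list printed further down in this log), and this test deliberately omits it. A sketch of the invocation the error is asking for, using the same bib file the next test supplies (path relative to the SPDK tree):
  build/examples/accel_perf -t 1 -w compress -l test/accel/bib
The following test (accel_compress_verify) then adds both -l and -y to show the complementary failure: compress does not support the verify option.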
00:04:50.271 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:04:50.271 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.271 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:04:50.271 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:04:50.271 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:04:50.271 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.271 00:04:50.271 real 0m0.423s 00:04:50.271 user 0m0.319s 00:04:50.271 sys 0m0.138s 00:04:50.271 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.271 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:04:50.271 ************************************ 00:04:50.271 END TEST accel_missing_filename 00:04:50.271 ************************************ 00:04:50.271 10:21:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:50.271 10:21:38 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.271 10:21:38 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:50.271 10:21:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.271 10:21:38 accel -- common/autotest_common.sh@10 -- # set +x 00:04:50.271 ************************************ 00:04:50.271 START TEST accel_compress_verify 00:04:50.271 ************************************ 00:04:50.271 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.271 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:04:50.271 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.271 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:50.271 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.271 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:50.271 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.271 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.271 10:21:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.271 10:21:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:50.271 10:21:38 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.271 10:21:38 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.271 10:21:38 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.271 10:21:38 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.271 10:21:38 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.271 10:21:38 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:50.271 10:21:38 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:04:50.272 [2024-07-15 10:21:38.733987] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:50.272 [2024-07-15 10:21:38.734049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088173 ] 00:04:50.272 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.272 [2024-07-15 10:21:38.791179] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.528 [2024-07-15 10:21:38.898938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.528 [2024-07-15 10:21:38.952217] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.528 [2024-07-15 10:21:39.033631] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:04:50.788 00:04:50.788 Compression does not support the verify option, aborting. 00:04:50.788 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:04:50.788 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.788 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:04:50.788 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:04:50.788 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:04:50.788 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.788 00:04:50.788 real 0m0.426s 00:04:50.788 user 0m0.323s 00:04:50.788 sys 0m0.137s 00:04:50.788 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.788 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:04:50.788 ************************************ 00:04:50.788 END TEST accel_compress_verify 00:04:50.788 ************************************ 00:04:50.788 10:21:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:50.788 10:21:39 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:50.788 10:21:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:50.788 10:21:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.788 10:21:39 accel -- common/autotest_common.sh@10 -- # set +x 00:04:50.788 ************************************ 00:04:50.788 START TEST accel_wrong_workload 00:04:50.788 ************************************ 00:04:50.788 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:04:50.788 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:04:50.788 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:50.788 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:50.788 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.788 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:50.788 10:21:39 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.788 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:04:50.788 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:50.788 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:04:50.788 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.788 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.788 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.788 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.788 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.788 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:04:50.788 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:04:50.788 Unsupported workload type: foobar 00:04:50.788 [2024-07-15 10:21:39.202352] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:50.788 accel_perf options: 00:04:50.788 [-h help message] 00:04:50.788 [-q queue depth per core] 00:04:50.788 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:50.788 [-T number of threads per core 00:04:50.788 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:50.788 [-t time in seconds] 00:04:50.788 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:50.788 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:50.788 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:50.788 [-l for compress/decompress workloads, name of uncompressed input file 00:04:50.788 [-S for crc32c workload, use this seed value (default 0) 00:04:50.788 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:50.788 [-f for fill workload, use this BYTE value (default 255) 00:04:50.788 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:50.788 [-y verify result if this switch is on] 00:04:50.788 [-a tasks to allocate per core (default: same value as -q)] 00:04:50.788 Can be used to spread operations across a wider range of memory. 
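The usage text above is printed because foobar is not one of the supported -w workload types, so accel_perf exits before running anything. Valid invocations pick a workload from that list; two sketches built from values that appear elsewhere in this log:
  build/examples/accel_perf -t 1 -w xor -y -x 2       # xor with the minimum of two source buffers
  build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # same shape as the crc32c run a little later in this log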
00:04:50.788 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:04:50.788 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.788 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:50.788 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.788 00:04:50.788 real 0m0.024s 00:04:50.788 user 0m0.013s 00:04:50.788 sys 0m0.011s 00:04:50.788 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.788 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:04:50.788 ************************************ 00:04:50.788 END TEST accel_wrong_workload 00:04:50.788 ************************************ 00:04:50.788 Error: writing output failed: Broken pipe 00:04:50.788 10:21:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:50.788 10:21:39 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:50.788 10:21:39 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:50.788 10:21:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.788 10:21:39 accel -- common/autotest_common.sh@10 -- # set +x 00:04:50.788 ************************************ 00:04:50.788 START TEST accel_negative_buffers 00:04:50.788 ************************************ 00:04:50.788 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:50.788 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:04:50.788 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:50.788 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:50.788 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.788 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:50.788 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.788 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:04:50.788 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:04:50.788 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:04:50.788 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.788 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.788 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.788 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.788 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.788 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:04:50.788 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:04:50.788 -x option must be non-negative. 
00:04:50.788 [2024-07-15 10:21:39.265982] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:50.788 accel_perf options: 00:04:50.788 [-h help message] 00:04:50.788 [-q queue depth per core] 00:04:50.788 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:50.788 [-T number of threads per core 00:04:50.788 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:50.788 [-t time in seconds] 00:04:50.788 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:50.788 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:50.788 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:50.788 [-l for compress/decompress workloads, name of uncompressed input file 00:04:50.788 [-S for crc32c workload, use this seed value (default 0) 00:04:50.788 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:50.788 [-f for fill workload, use this BYTE value (default 255) 00:04:50.788 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:50.788 [-y verify result if this switch is on] 00:04:50.788 [-a tasks to allocate per core (default: same value as -q)] 00:04:50.788 Can be used to spread operations across a wider range of memory. 00:04:50.788 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:04:50.788 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.788 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:50.788 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.788 00:04:50.788 real 0m0.020s 00:04:50.788 user 0m0.008s 00:04:50.788 sys 0m0.011s 00:04:50.788 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.788 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:04:50.788 ************************************ 00:04:50.788 END TEST accel_negative_buffers 00:04:50.788 ************************************ 00:04:50.788 Error: writing output failed: Broken pipe 00:04:50.788 10:21:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:50.788 10:21:39 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:50.788 10:21:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:50.788 10:21:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.788 10:21:39 accel -- common/autotest_common.sh@10 -- # set +x 00:04:50.788 ************************************ 00:04:50.788 START TEST accel_crc32c 00:04:50.788 ************************************ 00:04:50.788 10:21:39 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:50.789 10:21:39 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:50.789 10:21:39 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:50.789 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:50.789 10:21:39 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:50.789 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:50.789 10:21:39 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:50.789 10:21:39 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:50.789 10:21:39 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.789 10:21:39 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.789 10:21:39 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.789 10:21:39 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.789 10:21:39 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.789 10:21:39 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:50.789 10:21:39 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:50.789 [2024-07-15 10:21:39.335731] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:50.789 [2024-07-15 10:21:39.335795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088250 ] 00:04:51.075 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.075 [2024-07-15 10:21:39.394140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.076 [2024-07-15 10:21:39.497705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.076 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:52.448 10:21:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:52.448 00:04:52.448 real 0m1.435s 00:04:52.448 user 0m1.303s 00:04:52.448 sys 0m0.135s 00:04:52.448 10:21:40 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.448 10:21:40 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:52.448 ************************************ 00:04:52.448 END TEST accel_crc32c 00:04:52.448 ************************************ 00:04:52.448 10:21:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:52.448 10:21:40 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:52.448 10:21:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:52.448 10:21:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.448 10:21:40 accel -- common/autotest_common.sh@10 -- # set +x 00:04:52.448 ************************************ 00:04:52.448 START TEST accel_crc32c_C2 00:04:52.448 ************************************ 00:04:52.448 10:21:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:52.448 10:21:40 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:52.448 10:21:40 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:52.448 10:21:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.448 10:21:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.448 10:21:40 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:52.448 10:21:40 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:04:52.448 10:21:40 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:52.448 10:21:40 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.448 10:21:40 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.448 10:21:40 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.448 10:21:40 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.448 10:21:40 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.448 10:21:40 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:52.448 10:21:40 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:04:52.448 [2024-07-15 10:21:40.815112] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:52.448 [2024-07-15 10:21:40.815170] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088448 ] 00:04:52.448 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.448 [2024-07-15 10:21:40.871661] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.448 [2024-07-15 10:21:40.976136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.706 10:21:41 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.706 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.707 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:52.707 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.707 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.707 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.707 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:52.707 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.707 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.707 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.707 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:52.707 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.707 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:04:52.707 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:54.079 00:04:54.079 real 0m1.432s 00:04:54.079 user 0m1.302s 00:04:54.079 sys 0m0.131s 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.079 10:21:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:04:54.079 ************************************ 00:04:54.079 END TEST accel_crc32c_C2 00:04:54.079 ************************************ 00:04:54.079 10:21:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:54.079 10:21:42 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:54.079 10:21:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:54.079 10:21:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.079 10:21:42 accel -- common/autotest_common.sh@10 -- # set +x 00:04:54.079 ************************************ 00:04:54.079 START TEST accel_copy 00:04:54.079 ************************************ 00:04:54.079 10:21:42 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
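The "real/user/sys" block and the starred START TEST / END TEST banners above come from the run_test helper in autotest_common.sh, which times each named sub-test and passes its exit status back to the suite. A minimal sketch of that pattern (illustrative only; the real helper also handles xtrace control and error accounting):

    run_test_sketch() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                      # e.g. accel_test -t 1 -w copy -y
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

Each case in this log is launched that way, e.g. "run_test accel_copy accel_test -t 1 -w copy -y" at accel.sh@103 above.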
00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:04:54.079 [2024-07-15 10:21:42.297233] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:54.079 [2024-07-15 10:21:42.297296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088677 ] 00:04:54.079 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.079 [2024-07-15 10:21:42.353635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.079 [2024-07-15 10:21:42.461536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:04:54.079 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.080 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 
10:21:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:04:55.451 10:21:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:55.451 00:04:55.451 real 0m1.427s 00:04:55.451 user 0m1.302s 00:04:55.451 sys 0m0.126s 00:04:55.451 10:21:43 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.451 10:21:43 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:04:55.451 ************************************ 00:04:55.451 END TEST accel_copy 00:04:55.451 ************************************ 00:04:55.451 10:21:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:55.451 10:21:43 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:55.451 10:21:43 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:04:55.451 10:21:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.451 10:21:43 accel -- common/autotest_common.sh@10 -- # set +x 00:04:55.451 ************************************ 00:04:55.451 START TEST accel_fill 00:04:55.451 ************************************ 00:04:55.451 10:21:43 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:04:55.451 [2024-07-15 10:21:43.769417] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:55.451 [2024-07-15 10:21:43.769479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088836 ] 00:04:55.451 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.451 [2024-07-15 10:21:43.826136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.451 [2024-07-15 10:21:43.929943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.451 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
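The build_accel_config lines above (accel_json_cfg=(), the "-gt 0" guards, "local IFS=,", "jq -r .") assemble an optional JSON configuration for the accel framework, and the accel_perf command line for this run points "-c" at /dev/fd/62, which suggests the config is handed to the app on a dedicated file descriptor rather than a temporary file. A hedged sketch of that shell pattern (the function name and the JSON shape here are illustrative, not the exact SPDK code):

    accel_json_cfg=()        # per-module JSON fragments would be appended here
    accel_cfg_json() {
        local IFS=,          # join the fragments with commas, as the trace's "local IFS=," hints
        echo "{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}" | jq -r .
    }
    # Feed the config to accel_perf on fd 62 (flags copied from the trace above):
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 62< <(accel_cfg_json)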
00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.452 10:21:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.832 10:21:45 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:04:56.832 10:21:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:56.832 00:04:56.832 real 0m1.429s 00:04:56.832 user 0m1.292s 00:04:56.832 sys 0m0.139s 00:04:56.832 10:21:45 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.832 10:21:45 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:04:56.832 ************************************ 00:04:56.832 END TEST accel_fill 00:04:56.832 ************************************ 00:04:56.832 10:21:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:56.832 10:21:45 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:04:56.832 10:21:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:56.832 10:21:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.832 10:21:45 accel -- common/autotest_common.sh@10 -- # set +x 00:04:56.832 ************************************ 00:04:56.832 START TEST accel_copy_crc32c 00:04:56.832 ************************************ 00:04:56.832 10:21:45 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:04:56.832 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:56.832 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:56.832 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.832 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:56.832 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.832 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:04:56.832 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:56.832 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:56.833 10:21:45 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:56.833 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.833 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.833 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:56.833 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:56.833 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:56.833 [2024-07-15 10:21:45.245113] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:56.833 [2024-07-15 10:21:45.245173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088994 ] 00:04:56.833 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.833 [2024-07-15 10:21:45.302457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.091 [2024-07-15 10:21:45.409612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.091 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.092 
10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.092 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:58.466 00:04:58.466 real 0m1.427s 00:04:58.466 user 0m1.298s 00:04:58.466 sys 0m0.131s 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.466 10:21:46 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:58.466 ************************************ 00:04:58.466 END TEST accel_copy_crc32c 00:04:58.466 ************************************ 00:04:58.466 10:21:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:58.466 10:21:46 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:04:58.466 10:21:46 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:58.466 10:21:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.466 10:21:46 accel -- common/autotest_common.sh@10 -- # set +x 00:04:58.466 ************************************ 00:04:58.466 START TEST accel_copy_crc32c_C2 00:04:58.466 ************************************ 00:04:58.466 10:21:46 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:04:58.466 [2024-07-15 10:21:46.720541] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:58.466 [2024-07-15 10:21:46.720604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089261 ] 00:04:58.466 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.466 [2024-07-15 10:21:46.776878] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.466 [2024-07-15 10:21:46.880611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
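This case repeats copy_crc32c with "-C 2" ("run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2" at accel.sh@106 above). The flag appears to select a multi-buffer (vectored) variant of the operation: in the parsed output that follows, this run reports an "8192 bytes" size in addition to the "4096 bytes" seen in the single-buffer runs. Side by side, as the wrapper issues them:

    # Same accel_test wrapper for both; only the -C flag differs.
    accel_test -t 1 -w copy_crc32c -y          # plain copy + CRC-32C; trace reports "4096 bytes"
    accel_test -t 1 -w copy_crc32c -y -C 2     # "-C 2" variant; trace additionally reports "8192 bytes"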
00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.466 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.467 10:21:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
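Throughout these runs the repeating IFS=:, read -r var val and case "$var" in trace lines are accel.sh consuming accel_perf's summary one "key: value" line at a time: splitting on ":" yields values such as "4096 bytes", "software", "32", "1 seconds" and "Yes", and the module and workload fields are captured into accel_module and accel_opc (visible above as accel_module=software and accel_opc=copy_crc32c). A simplified sketch of that loop, with hypothetical key names (the real script matches accel_perf's exact labels):

    parse_accel_perf() {
        local var val
        accel_module='' accel_opc=''
        while IFS=: read -r var val; do
            case "$var" in
                *"Module"*)        accel_module=$(echo $val) ;;   # unquoted echo trims leading blanks, e.g. "software"
                *"Workload Type"*) accel_opc=$(echo $val) ;;      # e.g. "copy_crc32c"
            esac
        done
    }
    # Usage: accel_perf ... | { parse_accel_perf; echo "$accel_module $accel_opc"; }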
00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:59.839 00:04:59.839 real 0m1.419s 00:04:59.839 user 0m1.286s 00:04:59.839 sys 0m0.135s 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.839 10:21:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:04:59.839 ************************************ 00:04:59.839 END TEST accel_copy_crc32c_C2 00:04:59.839 ************************************ 00:04:59.839 10:21:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:59.839 10:21:48 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:04:59.839 10:21:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:59.839 10:21:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.839 10:21:48 accel -- common/autotest_common.sh@10 -- # set +x 00:04:59.839 ************************************ 00:04:59.839 START TEST accel_dualcast 00:04:59.839 ************************************ 00:04:59.839 10:21:48 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:04:59.839 10:21:48 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:04:59.839 10:21:48 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:04:59.839 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:59.839 10:21:48 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:04:59.839 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:59.839 10:21:48 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:04:59.839 10:21:48 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:04:59.839 10:21:48 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:59.839 10:21:48 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:59.839 10:21:48 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.839 10:21:48 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.839 10:21:48 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:59.839 10:21:48 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:04:59.839 10:21:48 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:04:59.839 [2024-07-15 10:21:48.186203] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
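When a run finishes, accel_test checks what that parser captured; the "[[ -n software ]]", "[[ -n copy_crc32c ]]" and "[[ software == \s\o\f\t\w\a\r\e ]]" lines at the top of this block are those assertions with the variables already expanded by xtrace (bash escapes the right-hand side of == character by character because it is treated as a pattern). With the variables still in place, the checks amount to:

    # Post-run checks at accel.sh@27 (sketch, written out one per line):
    [[ -n $accel_module ]]              # some module name was parsed from the output
    [[ -n $accel_opc ]]                 # the workload opcode (crc32c, copy, dualcast, ...) was parsed
    [[ $accel_module == software ]]     # these runs expect the software engine, not a hardware module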
00:04:59.839 [2024-07-15 10:21:48.186264] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089423 ] 00:04:59.839 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.839 [2024-07-15 10:21:48.242709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.839 [2024-07-15 10:21:48.348814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.098 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.471 10:21:49 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:01.471 10:21:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:01.471 00:05:01.471 real 0m1.434s 00:05:01.471 user 0m1.296s 00:05:01.471 sys 0m0.139s 00:05:01.471 10:21:49 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.471 10:21:49 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:01.471 ************************************ 00:05:01.471 END TEST accel_dualcast 00:05:01.471 ************************************ 00:05:01.471 10:21:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:01.471 10:21:49 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:01.471 10:21:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:01.471 10:21:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.471 10:21:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:01.471 ************************************ 00:05:01.471 START TEST accel_compare 00:05:01.471 ************************************ 00:05:01.471 10:21:49 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:01.471 10:21:49 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:01.471 10:21:49 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:01.471 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.471 10:21:49 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:01.471 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.471 10:21:49 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:01.471 10:21:49 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:01.471 10:21:49 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:01.471 10:21:49 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:01.471 10:21:49 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.471 10:21:49 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.471 10:21:49 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:01.471 10:21:49 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:01.471 10:21:49 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:01.471 [2024-07-15 10:21:49.673203] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:01.472 [2024-07-15 10:21:49.673266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089581 ] 00:05:01.472 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.472 [2024-07-15 10:21:49.730708] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.472 [2024-07-15 10:21:49.832616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.472 10:21:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.845 
10:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.845 10:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.846 10:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:02.846 10:21:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:02.846 10:21:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:02.846 00:05:02.846 real 0m1.428s 00:05:02.846 user 0m1.299s 00:05:02.846 sys 0m0.130s 00:05:02.846 10:21:51 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.846 10:21:51 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:02.846 ************************************ 00:05:02.846 END TEST accel_compare 00:05:02.846 ************************************ 00:05:02.846 10:21:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:02.846 10:21:51 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:02.846 10:21:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:02.846 10:21:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.846 10:21:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:02.846 ************************************ 00:05:02.846 START TEST accel_xor 00:05:02.846 ************************************ 00:05:02.846 10:21:51 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:02.846 [2024-07-15 10:21:51.147709] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:02.846 [2024-07-15 10:21:51.147772] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089849 ] 00:05:02.846 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.846 [2024-07-15 10:21:51.205048] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.846 [2024-07-15 10:21:51.310068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.846 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:04.221 00:05:04.221 real 0m1.431s 00:05:04.221 user 0m1.300s 00:05:04.221 sys 0m0.132s 00:05:04.221 10:21:52 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.221 10:21:52 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:04.221 ************************************ 00:05:04.221 END TEST accel_xor 00:05:04.221 ************************************ 00:05:04.221 10:21:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:04.221 10:21:52 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:04.221 10:21:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:04.221 10:21:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.221 10:21:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:04.221 ************************************ 00:05:04.221 START TEST accel_xor 00:05:04.221 ************************************ 00:05:04.221 10:21:52 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:04.221 10:21:52 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:04.222 10:21:52 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.222 10:21:52 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.222 10:21:52 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.222 10:21:52 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.222 10:21:52 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:04.222 10:21:52 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:04.222 10:21:52 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:04.222 [2024-07-15 10:21:52.629062] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:04.222 [2024-07-15 10:21:52.629135] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090009 ] 00:05:04.222 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.222 [2024-07-15 10:21:52.701227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.479 [2024-07-15 10:21:52.841017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.479 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.480 10:21:52 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.480 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:05.851 10:21:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:05.851 00:05:05.851 real 0m1.474s 00:05:05.851 user 0m1.326s 00:05:05.851 sys 0m0.150s 00:05:05.852 10:21:54 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.852 10:21:54 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:05.852 ************************************ 00:05:05.852 END TEST accel_xor 00:05:05.852 ************************************ 00:05:05.852 10:21:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:05.852 10:21:54 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:05.852 10:21:54 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:05.852 10:21:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.852 10:21:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:05.852 ************************************ 00:05:05.852 START TEST accel_dif_verify 00:05:05.852 ************************************ 00:05:05.852 10:21:54 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:05.852 [2024-07-15 10:21:54.151191] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:05.852 [2024-07-15 10:21:54.151256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090170 ] 00:05:05.852 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.852 [2024-07-15 10:21:54.208545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.852 [2024-07-15 10:21:54.312237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:05.852 10:21:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:07.227 10:21:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:07.227 00:05:07.227 real 0m1.430s 00:05:07.227 user 0m1.299s 00:05:07.227 sys 0m0.135s 00:05:07.227 10:21:55 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.227 10:21:55 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:07.227 ************************************ 00:05:07.227 END TEST accel_dif_verify 00:05:07.227 ************************************ 00:05:07.227 10:21:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:07.227 10:21:55 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:07.227 10:21:55 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:07.227 10:21:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.227 10:21:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.227 ************************************ 00:05:07.227 START TEST accel_dif_generate 00:05:07.227 ************************************ 00:05:07.227 10:21:55 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:07.227 10:21:55 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:07.227 10:21:55 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:07.227 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.227 
10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.227 10:21:55 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:07.227 10:21:55 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:07.227 10:21:55 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:07.227 10:21:55 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.227 10:21:55 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.227 10:21:55 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.227 10:21:55 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.227 10:21:55 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.227 10:21:55 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:07.227 10:21:55 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:07.227 [2024-07-15 10:21:55.627982] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:07.227 [2024-07-15 10:21:55.628043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090382 ] 00:05:07.227 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.227 [2024-07-15 10:21:55.684030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.486 [2024-07-15 10:21:55.791733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:07.486 10:21:55 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.486 10:21:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:08.859 10:21:57 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:08.859 10:21:57 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:08.859 00:05:08.859 real 0m1.435s 00:05:08.859 user 0m1.312s 00:05:08.859 sys 0m0.126s 00:05:08.859 10:21:57 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.859 10:21:57 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:08.859 ************************************ 00:05:08.859 END TEST accel_dif_generate 00:05:08.859 ************************************ 00:05:08.859 10:21:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:08.859 10:21:57 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:08.859 10:21:57 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:08.859 10:21:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.859 10:21:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:08.859 ************************************ 00:05:08.859 START TEST accel_dif_generate_copy 00:05:08.859 ************************************ 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:08.859 [2024-07-15 10:21:57.113315] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
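The repeated "IFS=: / read -r var val / case \"$var\"" entries above are the expected-settings loop in test/accel/accel.sh (lines 19-23 of this build's copy): it reads the key/value summary printed by accel_perf, captures the opcode and module, and asserts on them just before the END TEST banner. A minimal sketch of that pattern follows; the key names on the left are illustrative assumptions, since the trace only shows the captured values (dif_generate, software) and the raw value strings ('4096 bytes', '32', '1 seconds').

# Sketch only -- approximates the settings loop traced above; key names are assumed,
# not taken from accel.sh.
accel_opc="" accel_module=""
while IFS=: read -r var val; do
    case "$var" in
        *opcode*) accel_opc=${val# } ;;
        *module*) accel_module=${val# } ;;
    esac
done <<'EOF'
opcode: dif_generate
module: software
transfer size: 4096 bytes
queue depth: 32
run time: 1 seconds
EOF
# Assertions mirrored from the trace before the END TEST banner:
[[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]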
00:05:08.859 [2024-07-15 10:21:57.113379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090595 ] 00:05:08.859 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.859 [2024-07-15 10:21:57.172714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.859 [2024-07-15 10:21:57.275543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:08.859 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.860 10:21:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.233 00:05:10.233 real 0m1.419s 00:05:10.233 user 0m1.286s 00:05:10.233 sys 0m0.134s 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.233 10:21:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:10.233 ************************************ 00:05:10.233 END TEST accel_dif_generate_copy 00:05:10.233 ************************************ 00:05:10.233 10:21:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:10.233 10:21:58 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:10.233 10:21:58 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:10.233 10:21:58 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:10.233 10:21:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.233 10:21:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.233 ************************************ 00:05:10.233 START TEST accel_comp 00:05:10.233 ************************************ 00:05:10.233 10:21:58 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:10.233 10:21:58 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:10.233 10:21:58 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:10.233 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.233 10:21:58 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:10.233 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.233 10:21:58 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:10.233 10:21:58 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:10.233 10:21:58 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.233 10:21:58 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.233 10:21:58 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.233 10:21:58 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.233 10:21:58 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.233 10:21:58 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:10.234 10:21:58 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:10.234 [2024-07-15 10:21:58.581066] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:10.234 [2024-07-15 10:21:58.581135] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090761 ] 00:05:10.234 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.234 [2024-07-15 10:21:58.638757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.234 [2024-07-15 10:21:58.743301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:10.491 10:21:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:21:59 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:11.861 10:22:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:11.861 10:22:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.861 00:05:11.861 real 0m1.440s 00:05:11.861 user 0m1.308s 00:05:11.861 sys 0m0.135s 00:05:11.861 10:22:00 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.861 10:22:00 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:11.861 ************************************ 00:05:11.861 END TEST accel_comp 00:05:11.861 ************************************ 00:05:11.861 10:22:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:11.861 10:22:00 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:11.861 10:22:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:11.861 10:22:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.861 10:22:00 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:11.861 ************************************ 00:05:11.861 START TEST accel_decomp 00:05:11.861 ************************************ 00:05:11.861 10:22:00 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:11.861 [2024-07-15 10:22:00.070592] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
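For reference, the decompress case launched just above can be re-run by hand with the same command line the harness logs (accel_perf under spdk/build/examples, reading test/accel/bib). This is only a sketch: the harness normally pipes a generated JSON accel config in on /dev/fd/62 via -c; that option is dropped here on the assumption that a plain software-path run needs no module configuration.

# Sketch only: manual re-run of the decompress case, same paths as this workspace.
# The harness's "-c /dev/fd/62" JSON config is omitted (assumption: not needed
# when no hardware accel module is being configured).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" \
    -t 1 \
    -w decompress \
    -l "$SPDK/test/accel/bib" \
    -y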
00:05:11.861 [2024-07-15 10:22:00.070656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090915 ] 00:05:11.861 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.861 [2024-07-15 10:22:00.129573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.861 [2024-07-15 10:22:00.235940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:11.861 10:22:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.307 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:13.307 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.307 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.307 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.307 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:13.307 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.307 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.307 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.307 10:22:01 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:13.307 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.307 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.307 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.307 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:13.308 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:13.308 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:13.308 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:13.308 10:22:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:13.308 10:22:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.308 00:05:13.308 real 0m1.436s 00:05:13.308 user 0m1.301s 00:05:13.308 sys 0m0.137s 00:05:13.308 10:22:01 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.308 10:22:01 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:13.308 ************************************ 00:05:13.308 END TEST accel_decomp 00:05:13.308 ************************************ 00:05:13.308 10:22:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:13.308 10:22:01 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:13.308 10:22:01 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:13.308 10:22:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.308 10:22:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:13.308 ************************************ 00:05:13.308 START TEST accel_decomp_full 00:05:13.308 ************************************ 00:05:13.308 10:22:01 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:13.308 10:22:01 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:13.308 [2024-07-15 10:22:01.559252] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:13.308 [2024-07-15 10:22:01.559319] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091190 ] 00:05:13.308 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.308 [2024-07-15 10:22:01.617416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.308 [2024-07-15 10:22:01.720520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:13.308 10:22:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:14.679 10:22:02 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.679 00:05:14.679 real 0m1.441s 00:05:14.679 user 0m1.310s 00:05:14.679 sys 0m0.133s 00:05:14.679 10:22:02 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.679 10:22:02 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:14.679 ************************************ 00:05:14.679 END TEST accel_decomp_full 00:05:14.679 ************************************ 00:05:14.679 10:22:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:14.679 10:22:03 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:14.679 10:22:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:05:14.679 10:22:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.679 10:22:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.679 ************************************ 00:05:14.679 START TEST accel_decomp_mcore 00:05:14.679 ************************************ 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:14.679 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:14.679 [2024-07-15 10:22:03.048177] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
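The accel_decomp_mcore case starting here differs from the single-core decompress run only in the -m 0xf core mask: the DPDK EAL line switches from "-c 0x1" to "-c 0xf", spdk_app_start reports four available cores, and reactors come up on cores 0-3. A sketch of the two invocations side by side, using this workspace's paths:

# Sketch only: same decompress workload, different core masks.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
PERF=$SPDK/build/examples/accel_perf

# default mask 0x1: one reactor, EAL logs "-c 0x1"
"$PERF" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y

# mask 0xf: four reactors on cores 0-3, EAL logs "-c 0xf"; the -t 1 wall-clock
# budget stays the same while user CPU time grows (user 0m4.766s vs real
# 0m1.479s further down this log)
"$PERF" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf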
00:05:14.679 [2024-07-15 10:22:03.048240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091351 ] 00:05:14.679 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.679 [2024-07-15 10:22:03.117265] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:14.937 [2024-07-15 10:22:03.243160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.937 [2024-07-15 10:22:03.243226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.937 [2024-07-15 10:22:03.243290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.937 [2024-07-15 10:22:03.243293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:14.937 10:22:03 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:14.938 10:22:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:16.312 00:05:16.312 real 0m1.479s 00:05:16.312 user 0m4.766s 00:05:16.312 sys 0m0.153s 00:05:16.312 10:22:04 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.312 10:22:04 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:16.312 ************************************ 00:05:16.312 END TEST accel_decomp_mcore 00:05:16.312 ************************************ 00:05:16.312 10:22:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:16.312 10:22:04 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:16.312 10:22:04 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:16.312 10:22:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.312 10:22:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:16.312 ************************************ 00:05:16.312 START TEST accel_decomp_full_mcore 00:05:16.312 ************************************ 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:16.312 [2024-07-15 10:22:04.573942] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
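The accel_test wrapper above reduces to the accel_perf invocation printed on the accel/accel.sh@12 line; the paths and flags in this log are enough to repeat the multi-core decompress case by hand. A minimal sketch (the harness additionally supplies a generated accel JSON config on /dev/fd/62 via -c; -t is the run time in seconds, -w the workload, -l the compressed input, -y requests result verification, -o 0 appears to mean whole-file transfers, and -m 0xf is the core mask behind the four reactors started just below):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf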
00:05:16.312 [2024-07-15 10:22:04.574004] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091514 ] 00:05:16.312 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.312 [2024-07-15 10:22:04.630454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.312 [2024-07-15 10:22:04.741751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.312 [2024-07-15 10:22:04.741816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.312 [2024-07-15 10:22:04.741872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.312 [2024-07-15 10:22:04.741875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.312 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.313 10:22:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.687 00:05:17.687 real 0m1.458s 00:05:17.687 user 0m4.786s 00:05:17.687 sys 0m0.132s 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.687 10:22:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:17.687 ************************************ 00:05:17.687 END TEST accel_decomp_full_mcore 00:05:17.687 ************************************ 00:05:17.687 10:22:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:17.687 10:22:06 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:17.687 10:22:06 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:17.687 10:22:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.687 10:22:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.687 ************************************ 00:05:17.687 START TEST accel_decomp_mthread 00:05:17.687 ************************************ 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:17.687 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:17.687 [2024-07-15 10:22:06.077767] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
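The decompress mcore runs above report user time well above wall-clock time (1.458s real vs 4.786s user for the -m 0xf whole-file case), which is what a multi-core run should show: the ratio approximates the average number of busy cores out of the four reactors. A throwaway check with the numbers from this log:

    awk 'BEGIN { printf "avg busy cores: %.1f of 4\n", 4.786 / 1.458 }'   # ~3.3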
00:05:17.687 [2024-07-15 10:22:06.077837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091785 ] 00:05:17.687 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.687 [2024-07-15 10:22:06.133974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.945 [2024-07-15 10:22:06.239716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:17.945 10:22:06 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:17.945 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.946 10:22:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.320 00:05:19.320 real 0m1.437s 00:05:19.320 user 0m1.297s 00:05:19.320 sys 0m0.141s 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.320 10:22:07 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:19.320 ************************************ 00:05:19.320 END TEST accel_decomp_mthread 00:05:19.320 ************************************ 00:05:19.320 10:22:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:19.320 10:22:07 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:19.320 10:22:07 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:19.320 10:22:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.320 10:22:07 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:19.320 ************************************ 00:05:19.320 START TEST accel_decomp_full_mthread 00:05:19.320 ************************************ 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:19.320 [2024-07-15 10:22:07.562418] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
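The build_accel_config fragments in the preamble above (accel_json_cfg=(), the 0 -gt 0 feature checks, local IFS=, and jq -r .) are what produce the JSON that accel_perf reads from /dev/fd/62. A sketch of that helper's shape, assuming (as the -gt 0 checks imply for this run) that no hardware accel module is enabled so the config list stays empty; this is an outline of the harness function, not the verbatim accel.sh code:

    build_accel_config() {
        local accel_json_cfg=()   # would be filled with per-module snippets such as '{"method": "dsa_scan_accel_module"}'
        local IFS=,
        echo "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}" | jq -r .
    }
    # accel_perf -c <(build_accel_config) ...   # the process substitution is what shows up as -c /dev/fd/62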
00:05:19.320 [2024-07-15 10:22:07.562480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091946 ] 00:05:19.320 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.320 [2024-07-15 10:22:07.618565] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.320 [2024-07-15 10:22:07.721399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:19.320 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.321 10:22:07 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.321 10:22:07 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.321 10:22:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.693 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.694 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.694 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.694 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.694 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.694 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.694 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.694 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.694 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.694 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.694 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:20.694 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.694 00:05:20.694 real 0m1.497s 00:05:20.694 user 0m1.375s 00:05:20.694 sys 0m0.125s 00:05:20.694 10:22:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.694 10:22:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:20.694 ************************************ 00:05:20.694 END 
TEST accel_decomp_full_mthread 00:05:20.694 ************************************ 00:05:20.694 10:22:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:20.694 10:22:09 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:20.694 10:22:09 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:20.694 10:22:09 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:20.694 10:22:09 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:20.694 10:22:09 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.694 10:22:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.694 10:22:09 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.694 10:22:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.694 10:22:09 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.694 10:22:09 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.694 10:22:09 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.694 10:22:09 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:20.694 10:22:09 accel -- accel/accel.sh@41 -- # jq -r . 00:05:20.694 ************************************ 00:05:20.694 START TEST accel_dif_functional_tests 00:05:20.694 ************************************ 00:05:20.694 10:22:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:20.694 [2024-07-15 10:22:09.128984] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:20.694 [2024-07-15 10:22:09.129046] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092106 ] 00:05:20.694 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.694 [2024-07-15 10:22:09.189185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.952 [2024-07-15 10:22:09.298299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.952 [2024-07-15 10:22:09.298360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.952 [2024-07-15 10:22:09.298364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.952 00:05:20.952 00:05:20.952 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.952 http://cunit.sourceforge.net/ 00:05:20.952 00:05:20.952 00:05:20.952 Suite: accel_dif 00:05:20.952 Test: verify: DIF generated, GUARD check ...passed 00:05:20.952 Test: verify: DIF generated, APPTAG check ...passed 00:05:20.952 Test: verify: DIF generated, REFTAG check ...passed 00:05:20.952 Test: verify: DIF not generated, GUARD check ...[2024-07-15 10:22:09.394233] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:20.952 passed 00:05:20.952 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 10:22:09.394314] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:20.952 passed 00:05:20.952 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 10:22:09.394347] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:20.952 passed 00:05:20.952 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:20.952 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
10:22:09.394410] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:20.952 passed 00:05:20.952 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:20.952 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:20.952 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:20.952 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 10:22:09.394549] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:20.952 passed 00:05:20.952 Test: verify copy: DIF generated, GUARD check ...passed 00:05:20.952 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:20.952 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:20.952 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 10:22:09.394699] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:20.952 passed 00:05:20.952 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 10:22:09.394735] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:20.952 passed 00:05:20.952 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 10:22:09.394768] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:20.952 passed 00:05:20.952 Test: generate copy: DIF generated, GUARD check ...passed 00:05:20.952 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:20.952 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:20.952 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:20.952 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:20.952 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:20.952 Test: generate copy: iovecs-len validate ...[2024-07-15 10:22:09.395024] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:20.952 passed 00:05:20.952 Test: generate copy: buffer alignment validate ...passed 00:05:20.952 00:05:20.952 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.952 suites 1 1 n/a 0 0 00:05:20.952 tests 26 26 26 0 0 00:05:20.952 asserts 115 115 115 0 n/a 00:05:20.952 00:05:20.952 Elapsed time = 0.003 seconds 00:05:21.211 00:05:21.211 real 0m0.548s 00:05:21.211 user 0m0.832s 00:05:21.211 sys 0m0.183s 00:05:21.211 10:22:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.211 10:22:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:21.211 ************************************ 00:05:21.211 END TEST accel_dif_functional_tests 00:05:21.211 ************************************ 00:05:21.211 10:22:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:21.211 00:05:21.211 real 0m32.525s 00:05:21.211 user 0m36.093s 00:05:21.211 sys 0m4.385s 00:05:21.211 10:22:09 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.211 10:22:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.211 ************************************ 00:05:21.211 END TEST accel 00:05:21.211 ************************************ 00:05:21.211 10:22:09 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.211 10:22:09 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:21.211 10:22:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.211 10:22:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.211 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:05:21.211 ************************************ 00:05:21.211 START TEST accel_rpc 00:05:21.211 ************************************ 00:05:21.211 10:22:09 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:21.211 * Looking for test storage... 00:05:21.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:21.469 10:22:09 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.469 10:22:09 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1092288 00:05:21.469 10:22:09 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:21.469 10:22:09 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1092288 00:05:21.469 10:22:09 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1092288 ']' 00:05:21.469 10:22:09 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.469 10:22:09 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.469 10:22:09 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.469 10:22:09 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.469 10:22:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.469 [2024-07-15 10:22:09.818199] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
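accel_rpc here (and app_cmdline after it) both follow the lifecycle wrapped by the autotest_common.sh helpers: start spdk_tgt in the background, waitforlisten until the RPC socket answers, drive the target over rpc.py, then killprocess. A stripped-down sketch of that flow in plain shell (the polling loop and kill/wait below only approximate what waitforlisten and killprocess actually do):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    tgt_pid=$!
    until "$SPDK/scripts/rpc.py" -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.2                       # waitforlisten equivalent: poll the default /var/tmp/spdk.sock
    done
    # ... issue the test RPCs here ...
    kill "$tgt_pid" && wait "$tgt_pid"  # killprocess equivalent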
00:05:21.469 [2024-07-15 10:22:09.818293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092288 ] 00:05:21.469 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.469 [2024-07-15 10:22:09.874996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.469 [2024-07-15 10:22:09.981197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.727 10:22:10 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.727 10:22:10 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:21.727 10:22:10 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:21.727 10:22:10 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:21.727 10:22:10 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:21.727 10:22:10 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:21.727 10:22:10 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:21.727 10:22:10 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.727 10:22:10 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.727 10:22:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.727 ************************************ 00:05:21.727 START TEST accel_assign_opcode 00:05:21.727 ************************************ 00:05:21.727 10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:21.727 10:22:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:21.727 10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.727 10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:21.727 [2024-07-15 10:22:10.053818] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:21.727 10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.727 10:22:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:21.727 10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.727 10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:21.727 [2024-07-15 10:22:10.061833] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:21.727 10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.727 10:22:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:21.727 10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.727 10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:21.985 10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.985 10:22:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:21.985 10:22:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:21.985 10:22:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:21.985 
10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.985 10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:21.985 10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.985 software 00:05:21.985 00:05:21.985 real 0m0.312s 00:05:21.985 user 0m0.038s 00:05:21.985 sys 0m0.006s 00:05:21.985 10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.985 10:22:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:21.985 ************************************ 00:05:21.985 END TEST accel_assign_opcode 00:05:21.985 ************************************ 00:05:21.985 10:22:10 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:21.985 10:22:10 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1092288 00:05:21.985 10:22:10 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1092288 ']' 00:05:21.985 10:22:10 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1092288 00:05:21.985 10:22:10 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:21.985 10:22:10 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.985 10:22:10 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1092288 00:05:21.985 10:22:10 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.985 10:22:10 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.985 10:22:10 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1092288' 00:05:21.985 killing process with pid 1092288 00:05:21.985 10:22:10 accel_rpc -- common/autotest_common.sh@967 -- # kill 1092288 00:05:21.985 10:22:10 accel_rpc -- common/autotest_common.sh@972 -- # wait 1092288 00:05:22.551 00:05:22.551 real 0m1.132s 00:05:22.551 user 0m1.067s 00:05:22.551 sys 0m0.415s 00:05:22.551 10:22:10 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.551 10:22:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.551 ************************************ 00:05:22.551 END TEST accel_rpc 00:05:22.551 ************************************ 00:05:22.551 10:22:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:22.551 10:22:10 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:22.551 10:22:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.551 10:22:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.551 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:05:22.551 ************************************ 00:05:22.551 START TEST app_cmdline 00:05:22.551 ************************************ 00:05:22.551 10:22:10 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:22.551 * Looking for test storage... 
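The opcode-assignment sequence that just passed can be repeated against any --wait-for-rpc target with the same RPCs that appear in this log; the module and opcode names below are the ones the test used ('incorrect' is the deliberately bogus module the first call assigns before the real one overrides it):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m incorrect    # accepted pre-init, overridden by the next call
    $RPC accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
    $RPC framework_start_init                     # leave --wait-for-rpc mode so assignments take effect
    $RPC accel_get_opc_assignments | jq -r .copy  # prints: software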
00:05:22.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:22.551 10:22:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:22.551 10:22:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1092494 00:05:22.551 10:22:10 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:22.551 10:22:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1092494 00:05:22.551 10:22:10 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1092494 ']' 00:05:22.551 10:22:10 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.551 10:22:10 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.551 10:22:10 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.551 10:22:10 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.551 10:22:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:22.551 [2024-07-15 10:22:10.996620] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:22.551 [2024-07-15 10:22:10.996699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092494 ] 00:05:22.551 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.551 [2024-07-15 10:22:11.055494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.810 [2024-07-15 10:22:11.169999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.068 10:22:11 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.068 10:22:11 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:23.068 10:22:11 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:23.326 { 00:05:23.326 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:05:23.326 "fields": { 00:05:23.326 "major": 24, 00:05:23.326 "minor": 9, 00:05:23.326 "patch": 0, 00:05:23.326 "suffix": "-pre", 00:05:23.326 "commit": "719d03c6a" 00:05:23.326 } 00:05:23.326 } 00:05:23.326 10:22:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:23.326 10:22:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:23.326 10:22:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:23.326 10:22:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:23.326 10:22:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:23.326 10:22:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:23.326 10:22:11 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.326 10:22:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:23.326 10:22:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:23.326 10:22:11 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.326 10:22:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:23.326 10:22:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:23.326 10:22:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:23.326 10:22:11 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:23.326 10:22:11 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:23.326 10:22:11 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:23.326 10:22:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.326 10:22:11 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:23.326 10:22:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.326 10:22:11 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:23.326 10:22:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.326 10:22:11 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:23.326 10:22:11 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:23.326 10:22:11 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:23.584 request: 00:05:23.584 { 00:05:23.584 "method": "env_dpdk_get_mem_stats", 00:05:23.584 "req_id": 1 00:05:23.584 } 00:05:23.584 Got JSON-RPC error response 00:05:23.584 response: 00:05:23.584 { 00:05:23.584 "code": -32601, 00:05:23.584 "message": "Method not found" 00:05:23.584 } 00:05:23.584 10:22:11 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:23.584 10:22:11 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.584 10:22:11 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.584 10:22:11 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.584 10:22:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1092494 00:05:23.584 10:22:11 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1092494 ']' 00:05:23.584 10:22:11 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1092494 00:05:23.584 10:22:11 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:23.584 10:22:11 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.584 10:22:11 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1092494 00:05:23.584 10:22:11 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.584 10:22:11 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.584 10:22:11 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1092494' 00:05:23.584 killing process with pid 1092494 00:05:23.584 10:22:11 app_cmdline -- common/autotest_common.sh@967 -- # kill 1092494 00:05:23.584 10:22:11 app_cmdline -- common/autotest_common.sh@972 -- # wait 1092494 00:05:23.842 00:05:23.842 real 0m1.488s 00:05:23.842 user 0m1.823s 00:05:23.842 sys 0m0.439s 00:05:23.842 10:22:12 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:05:23.842 10:22:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:23.842 ************************************ 00:05:23.842 END TEST app_cmdline 00:05:23.842 ************************************ 00:05:24.100 10:22:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.100 10:22:12 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:24.100 10:22:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.100 10:22:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.100 10:22:12 -- common/autotest_common.sh@10 -- # set +x 00:05:24.100 ************************************ 00:05:24.100 START TEST version 00:05:24.100 ************************************ 00:05:24.100 10:22:12 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:24.100 * Looking for test storage... 00:05:24.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:24.100 10:22:12 version -- app/version.sh@17 -- # get_header_version major 00:05:24.100 10:22:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:24.100 10:22:12 version -- app/version.sh@14 -- # cut -f2 00:05:24.100 10:22:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:24.100 10:22:12 version -- app/version.sh@17 -- # major=24 00:05:24.100 10:22:12 version -- app/version.sh@18 -- # get_header_version minor 00:05:24.100 10:22:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:24.100 10:22:12 version -- app/version.sh@14 -- # cut -f2 00:05:24.100 10:22:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:24.100 10:22:12 version -- app/version.sh@18 -- # minor=9 00:05:24.100 10:22:12 version -- app/version.sh@19 -- # get_header_version patch 00:05:24.100 10:22:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:24.100 10:22:12 version -- app/version.sh@14 -- # cut -f2 00:05:24.100 10:22:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:24.100 10:22:12 version -- app/version.sh@19 -- # patch=0 00:05:24.100 10:22:12 version -- app/version.sh@20 -- # get_header_version suffix 00:05:24.100 10:22:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:24.100 10:22:12 version -- app/version.sh@14 -- # cut -f2 00:05:24.100 10:22:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:24.100 10:22:12 version -- app/version.sh@20 -- # suffix=-pre 00:05:24.100 10:22:12 version -- app/version.sh@22 -- # version=24.9 00:05:24.100 10:22:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:24.100 10:22:12 version -- app/version.sh@28 -- # version=24.9rc0 00:05:24.100 10:22:12 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:24.100 10:22:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:05:24.100 10:22:12 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:24.100 10:22:12 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:24.100 00:05:24.100 real 0m0.105s 00:05:24.100 user 0m0.051s 00:05:24.100 sys 0m0.076s 00:05:24.100 10:22:12 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.100 10:22:12 version -- common/autotest_common.sh@10 -- # set +x 00:05:24.100 ************************************ 00:05:24.100 END TEST version 00:05:24.100 ************************************ 00:05:24.100 10:22:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.100 10:22:12 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:24.100 10:22:12 -- spdk/autotest.sh@198 -- # uname -s 00:05:24.100 10:22:12 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:24.100 10:22:12 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:24.100 10:22:12 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:24.100 10:22:12 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:24.100 10:22:12 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:24.100 10:22:12 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:24.100 10:22:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.100 10:22:12 -- common/autotest_common.sh@10 -- # set +x 00:05:24.100 10:22:12 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:24.100 10:22:12 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:24.100 10:22:12 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:24.100 10:22:12 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:24.100 10:22:12 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:24.100 10:22:12 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:24.101 10:22:12 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:24.101 10:22:12 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:24.101 10:22:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.101 10:22:12 -- common/autotest_common.sh@10 -- # set +x 00:05:24.101 ************************************ 00:05:24.101 START TEST nvmf_tcp 00:05:24.101 ************************************ 00:05:24.101 10:22:12 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:24.101 * Looking for test storage... 00:05:24.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:24.101 10:22:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:24.101 10:22:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:24.101 10:22:12 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.101 10:22:12 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:24.101 10:22:12 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.101 10:22:12 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.101 10:22:12 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.101 10:22:12 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.101 10:22:12 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.101 10:22:12 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.101 10:22:12 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.101 10:22:12 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.101 10:22:12 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.360 10:22:12 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.360 10:22:12 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.360 10:22:12 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.360 10:22:12 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.360 10:22:12 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.360 10:22:12 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.360 10:22:12 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:24.360 10:22:12 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:24.360 10:22:12 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.360 10:22:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:24.360 10:22:12 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:24.360 10:22:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:24.360 10:22:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.360 10:22:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.360 ************************************ 00:05:24.360 START TEST nvmf_example 00:05:24.360 ************************************ 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:24.360 * Looking for test storage... 
00:05:24.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:24.360 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:24.361 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:24.361 10:22:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:24.361 10:22:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:05:26.260 Found 0000:09:00.0 (0x8086 - 0x159b) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:05:26.260 Found 0000:09:00.1 (0x8086 - 0x159b) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:05:26.260 Found net devices under 
0000:09:00.0: cvl_0_0 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:26.260 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:05:26.261 Found net devices under 0000:09:00.1: cvl_0_1 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:26.261 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:26.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:26.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:05:26.519 00:05:26.519 --- 10.0.0.2 ping statistics --- 00:05:26.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:26.519 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:26.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:26.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:05:26.519 00:05:26.519 --- 10.0.0.1 ping statistics --- 00:05:26.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:26.519 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1094516 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1094516 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1094516 ']' 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.519 10:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:26.519 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:27.452 10:22:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:27.709 EAL: No free 2048 kB hugepages reported on node 1 
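For reference, the target bring-up traced above reduces to roughly the RPC sequence below before spdk_nvme_perf is launched. This is a minimal sketch, not the test script itself: it calls scripts/rpc.py directly and omits the rpc_cmd wrapper and the cvl_0_0_ns_spdk network-namespace prefix ($NVMF_TARGET_NS_CMD) that the run above applies to every target-side command; all method names, arguments, and addresses are the ones shown in the trace.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# TCP transport and backing bdev (64 MiB malloc bdev, 512-byte blocks)
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512          # returns "Malloc0"

# subsystem, namespace and TCP listener, using the values from this run
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator-side load generator, matching the parameters above
$SPDK/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
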
00:05:37.690 Initializing NVMe Controllers 00:05:37.690 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:37.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:37.690 Initialization complete. Launching workers. 00:05:37.690 ======================================================== 00:05:37.690 Latency(us) 00:05:37.690 Device Information : IOPS MiB/s Average min max 00:05:37.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14279.27 55.78 4481.56 932.04 15869.13 00:05:37.690 ======================================================== 00:05:37.690 Total : 14279.27 55.78 4481.56 932.04 15869.13 00:05:37.690 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:37.690 rmmod nvme_tcp 00:05:37.690 rmmod nvme_fabrics 00:05:37.690 rmmod nvme_keyring 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1094516 ']' 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1094516 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1094516 ']' 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1094516 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:05:37.690 10:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.967 10:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1094516 00:05:37.967 10:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:05:37.967 10:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:05:37.967 10:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1094516' 00:05:37.967 killing process with pid 1094516 00:05:37.967 10:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1094516 00:05:37.967 10:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1094516 00:05:37.967 nvmf threads initialize successfully 00:05:37.967 bdev subsystem init successfully 00:05:37.967 created a nvmf target service 00:05:37.967 create targets's poll groups done 00:05:37.967 all subsystems of target started 00:05:37.967 nvmf target is running 00:05:37.967 all subsystems of target stopped 00:05:37.967 destroy targets's poll groups done 00:05:37.967 destroyed the nvmf target service 00:05:37.967 bdev subsystem finish successfully 00:05:37.967 nvmf threads destroy successfully 00:05:37.967 10:22:26 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:37.967 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:37.967 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:37.967 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:37.967 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:37.967 10:22:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:37.967 10:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:37.967 10:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:40.511 10:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:40.511 10:22:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:40.511 10:22:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:40.511 10:22:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:40.511 00:05:40.511 real 0m15.863s 00:05:40.511 user 0m43.072s 00:05:40.511 sys 0m4.113s 00:05:40.511 10:22:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.511 10:22:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:40.511 ************************************ 00:05:40.511 END TEST nvmf_example 00:05:40.511 ************************************ 00:05:40.511 10:22:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:05:40.511 10:22:28 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:40.511 10:22:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:40.511 10:22:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.511 10:22:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.511 ************************************ 00:05:40.511 START TEST nvmf_filesystem 00:05:40.511 ************************************ 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:40.511 * Looking for test storage... 
00:05:40.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:40.511 10:22:28 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:40.511 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:40.512 #define SPDK_CONFIG_H 00:05:40.512 #define SPDK_CONFIG_APPS 1 00:05:40.512 #define SPDK_CONFIG_ARCH native 00:05:40.512 #undef SPDK_CONFIG_ASAN 00:05:40.512 #undef SPDK_CONFIG_AVAHI 00:05:40.512 #undef SPDK_CONFIG_CET 00:05:40.512 #define SPDK_CONFIG_COVERAGE 1 00:05:40.512 #define SPDK_CONFIG_CROSS_PREFIX 00:05:40.512 #undef SPDK_CONFIG_CRYPTO 00:05:40.512 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:40.512 #undef SPDK_CONFIG_CUSTOMOCF 00:05:40.512 #undef SPDK_CONFIG_DAOS 00:05:40.512 #define SPDK_CONFIG_DAOS_DIR 00:05:40.512 #define SPDK_CONFIG_DEBUG 1 00:05:40.512 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:40.512 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:40.512 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:40.512 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:40.512 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:40.512 #undef SPDK_CONFIG_DPDK_UADK 00:05:40.512 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:40.512 #define SPDK_CONFIG_EXAMPLES 1 00:05:40.512 #undef SPDK_CONFIG_FC 00:05:40.512 #define SPDK_CONFIG_FC_PATH 00:05:40.512 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:40.512 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:40.512 #undef SPDK_CONFIG_FUSE 00:05:40.512 #undef SPDK_CONFIG_FUZZER 00:05:40.512 #define SPDK_CONFIG_FUZZER_LIB 00:05:40.512 #undef SPDK_CONFIG_GOLANG 00:05:40.512 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:40.512 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:40.512 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:40.512 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:40.512 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:40.512 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:40.512 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:40.512 #define SPDK_CONFIG_IDXD 1 00:05:40.512 #define SPDK_CONFIG_IDXD_KERNEL 1 00:05:40.512 #undef SPDK_CONFIG_IPSEC_MB 00:05:40.512 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:40.512 #define SPDK_CONFIG_ISAL 1 00:05:40.512 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:40.512 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:40.512 #define SPDK_CONFIG_LIBDIR 00:05:40.512 #undef SPDK_CONFIG_LTO 00:05:40.512 #define SPDK_CONFIG_MAX_LCORES 128 00:05:40.512 #define SPDK_CONFIG_NVME_CUSE 1 00:05:40.512 #undef SPDK_CONFIG_OCF 00:05:40.512 #define SPDK_CONFIG_OCF_PATH 00:05:40.512 #define 
SPDK_CONFIG_OPENSSL_PATH 00:05:40.512 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:40.512 #define SPDK_CONFIG_PGO_DIR 00:05:40.512 #undef SPDK_CONFIG_PGO_USE 00:05:40.512 #define SPDK_CONFIG_PREFIX /usr/local 00:05:40.512 #undef SPDK_CONFIG_RAID5F 00:05:40.512 #undef SPDK_CONFIG_RBD 00:05:40.512 #define SPDK_CONFIG_RDMA 1 00:05:40.512 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:40.512 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:40.512 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:40.512 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:40.512 #define SPDK_CONFIG_SHARED 1 00:05:40.512 #undef SPDK_CONFIG_SMA 00:05:40.512 #define SPDK_CONFIG_TESTS 1 00:05:40.512 #undef SPDK_CONFIG_TSAN 00:05:40.512 #define SPDK_CONFIG_UBLK 1 00:05:40.512 #define SPDK_CONFIG_UBSAN 1 00:05:40.512 #undef SPDK_CONFIG_UNIT_TESTS 00:05:40.512 #undef SPDK_CONFIG_URING 00:05:40.512 #define SPDK_CONFIG_URING_PATH 00:05:40.512 #undef SPDK_CONFIG_URING_ZNS 00:05:40.512 #undef SPDK_CONFIG_USDT 00:05:40.512 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:40.512 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:40.512 #define SPDK_CONFIG_VFIO_USER 1 00:05:40.512 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:40.512 #define SPDK_CONFIG_VHOST 1 00:05:40.512 #define SPDK_CONFIG_VIRTIO 1 00:05:40.512 #undef SPDK_CONFIG_VTUNE 00:05:40.512 #define SPDK_CONFIG_VTUNE_DIR 00:05:40.512 #define SPDK_CONFIG_WERROR 1 00:05:40.512 #define SPDK_CONFIG_WPDK_DIR 00:05:40.512 #undef SPDK_CONFIG_XNVME 00:05:40.512 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.512 10:22:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:05:40.513 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:05:40.514 10:22:28 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
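For reference, the environment that autotest_common.sh has exported by this point reduces to roughly the following; this is a minimal sketch assuming a checkout rooted at $SPDK_ROOT (a placeholder for the workspace path seen above, not a variable the script itself defines), with the option values copied from the log.

    # Minimal sketch of the runtime environment exported above; $SPDK_ROOT is a
    # placeholder for the /var/jenkins workspace path, adjust for a local checkout.
    export SPDK_LIB_DIR=$SPDK_ROOT/build/lib
    export DPDK_LIB_DIR=$SPDK_ROOT/dpdk/build/lib
    export VFIO_LIB_DIR=$SPDK_ROOT/build/libvfio-user/usr/local/lib
    export LD_LIBRARY_PATH=$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR
    export PYTHONPATH=$SPDK_ROOT/python:$SPDK_ROOT/test/rpc_plugins
    export PYTHONDONTWRITEBYTECODE=1
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
    export HUGEMEM=4096 CLEAR_HUGE=yes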
00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1096214 ]] 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1096214 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.cm1Cvn 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.cm1Cvn/tests/target /tmp/spdk.cm1Cvn 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:05:40.514 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=952066048 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4332363776 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=56432025600 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994713088 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5562687488 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30993981440 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996893696 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:05:40.515 10:22:28 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=462848 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:05:40.515 * Looking for test storage... 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=56432025600 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7777280000 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:40.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:05:40.515 10:22:28 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.515 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
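The nvmf/common.sh defaults sourced above (NVMF_PORT=4420, NVMF_SERIAL, NVME_HOSTNQN and NVME_HOSTID from nvme gen-hostnqn, NVME_CONNECT='nvme connect') are what the filesystem test later uses to attach an initiator to the target. A hedged sketch of that eventual connect call, using the target address and subsystem NQN that appear further down in this log:

    # Sketch only: the harness issues the equivalent of this once the target is
    # listening (the address, port, and NQN are the ones logged later in this run).
    nvme connect -t tcp \
        -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"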
00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:40.516 10:22:28 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:40.516 10:22:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:42.419 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:42.419 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:05:42.419 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:42.419 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:42.419 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:42.419 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:42.419 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:42.419 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:05:42.420 Found 0000:09:00.0 (0x8086 - 0x159b) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:05:42.420 Found 0000:09:00.1 (0x8086 - 0x159b) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:42.420 10:22:30 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:05:42.420 Found net devices under 0000:09:00.0: cvl_0_0 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:05:42.420 Found net devices under 0000:09:00.1: cvl_0_1 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:42.420 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:42.678 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:42.678 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:42.678 10:22:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:42.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:42.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:05:42.678 00:05:42.678 --- 10.0.0.2 ping statistics --- 00:05:42.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:42.678 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:42.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:42.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:05:42.678 00:05:42.678 --- 10.0.0.1 ping statistics --- 00:05:42.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:42.678 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:42.678 ************************************ 00:05:42.678 START TEST nvmf_filesystem_no_in_capsule 00:05:42.678 ************************************ 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:05:42.678 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:05:42.679 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:42.679 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:42.679 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:05:42.679 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:42.679 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1097845 00:05:42.679 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:42.679 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1097845 00:05:42.679 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1097845 ']' 00:05:42.679 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.679 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.679 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.679 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.679 10:22:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:42.679 [2024-07-15 10:22:31.114705] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:42.679 [2024-07-15 10:22:31.114797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:42.679 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.679 [2024-07-15 10:22:31.181939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.937 [2024-07-15 10:22:31.294130] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:42.937 [2024-07-15 10:22:31.294193] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:42.937 [2024-07-15 10:22:31.294220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:42.937 [2024-07-15 10:22:31.294231] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:42.937 [2024-07-15 10:22:31.294240] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
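The block above amounts to a small piece of namespace plumbing before the target comes up: the port under 0000:09:00.1 (cvl_0_1) stays in the root namespace as the initiator interface, the port under 0000:09:00.0 (cvl_0_0) is moved into a fresh namespace, each side gets an address on 10.0.0.0/24, TCP port 4420 is opened, reachability is checked in both directions, and nvmf_tgt is started inside the namespace. A condensed sketch of that sequence, using the interface names, addresses and pid handling from this run:

# condensed from nvmf/common.sh (nvmf_tcp_init) and nvmfappstart, as traced above
TGT_IF=cvl_0_0          # target-side port, 0000:09:00.0
INI_IF=cvl_0_1          # initiator-side port, 0000:09:00.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # NVMF_INITIATOR_IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # NVMF_FIRST_TARGET_IP
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target ns -> root ns
modprobe nvme-tcp

# start the target inside the namespace; the harness then polls /var/tmp/spdk.sock
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!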
00:05:42.937 [2024-07-15 10:22:31.294368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.937 [2024-07-15 10:22:31.294435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.937 [2024-07-15 10:22:31.294457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.937 [2024-07-15 10:22:31.294460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.870 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.870 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:43.871 [2024-07-15 10:22:32.123992] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:43.871 Malloc1 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:43.871 [2024-07-15 10:22:32.308225] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:05:43.871 { 00:05:43.871 "name": "Malloc1", 00:05:43.871 "aliases": [ 00:05:43.871 "00349a3d-83ac-44ec-8312-31b2582323c9" 00:05:43.871 ], 00:05:43.871 "product_name": "Malloc disk", 00:05:43.871 "block_size": 512, 00:05:43.871 "num_blocks": 1048576, 00:05:43.871 "uuid": "00349a3d-83ac-44ec-8312-31b2582323c9", 00:05:43.871 "assigned_rate_limits": { 00:05:43.871 "rw_ios_per_sec": 0, 00:05:43.871 "rw_mbytes_per_sec": 0, 00:05:43.871 "r_mbytes_per_sec": 0, 00:05:43.871 "w_mbytes_per_sec": 0 00:05:43.871 }, 00:05:43.871 "claimed": true, 00:05:43.871 "claim_type": "exclusive_write", 00:05:43.871 "zoned": false, 00:05:43.871 "supported_io_types": { 00:05:43.871 "read": true, 00:05:43.871 "write": true, 00:05:43.871 "unmap": true, 00:05:43.871 "flush": true, 00:05:43.871 "reset": true, 00:05:43.871 "nvme_admin": false, 00:05:43.871 "nvme_io": false, 00:05:43.871 "nvme_io_md": false, 00:05:43.871 "write_zeroes": true, 00:05:43.871 "zcopy": true, 00:05:43.871 "get_zone_info": false, 00:05:43.871 "zone_management": false, 00:05:43.871 "zone_append": false, 00:05:43.871 "compare": false, 00:05:43.871 "compare_and_write": false, 00:05:43.871 "abort": true, 00:05:43.871 "seek_hole": false, 00:05:43.871 "seek_data": false, 00:05:43.871 "copy": true, 00:05:43.871 "nvme_iov_md": false 00:05:43.871 }, 00:05:43.871 "memory_domains": [ 00:05:43.871 { 
00:05:43.871 "dma_device_id": "system", 00:05:43.871 "dma_device_type": 1 00:05:43.871 }, 00:05:43.871 { 00:05:43.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.871 "dma_device_type": 2 00:05:43.871 } 00:05:43.871 ], 00:05:43.871 "driver_specific": {} 00:05:43.871 } 00:05:43.871 ]' 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:43.871 10:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:44.805 10:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:44.805 10:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:05:44.805 10:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:05:44.805 10:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:05:44.805 10:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:05:46.731 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:05:46.731 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:05:46.731 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:05:46.731 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:05:46.731 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:05:46.731 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:05:46.732 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:05:46.732 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:05:46.732 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:05:46.732 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:05:46.732 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:46.732 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:46.732 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:05:46.732 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:05:46.732 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:05:46.732 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:05:46.732 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:05:46.989 10:22:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:05:47.921 10:22:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:05:48.854 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:05:48.854 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:05:48.854 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:48.854 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.854 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:48.854 ************************************ 00:05:48.854 START TEST filesystem_ext4 00:05:48.854 ************************************ 00:05:48.854 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:05:48.855 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:05:48.855 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:48.855 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:05:48.855 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:05:48.855 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:05:48.855 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:05:48.855 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:05:48.855 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:05:48.855 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:05:48.855 10:22:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:05:48.855 mke2fs 1.46.5 (30-Dec-2021) 00:05:48.855 Discarding device blocks: 0/522240 done 00:05:48.855 Creating filesystem with 522240 1k blocks and 130560 inodes 00:05:48.855 Filesystem UUID: b770e82a-1404-4655-a592-c471aa718bfa 00:05:48.855 Superblock backups stored on blocks: 00:05:48.855 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:05:48.855 00:05:48.855 Allocating group tables: 0/64 done 00:05:48.855 Writing inode tables: 0/64 done 00:05:49.419 Creating journal (8192 blocks): done 00:05:49.419 Writing superblocks and filesystem accounting information: 0/64 done 00:05:49.419 00:05:49.419 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:05:49.419 10:22:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1097845 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:50.350 00:05:50.350 real 0m1.487s 00:05:50.350 user 0m0.013s 00:05:50.350 sys 0m0.062s 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:05:50.350 ************************************ 00:05:50.350 END TEST filesystem_ext4 00:05:50.350 ************************************ 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:50.350 10:22:38 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:50.350 ************************************ 00:05:50.350 START TEST filesystem_btrfs 00:05:50.350 ************************************ 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:05:50.350 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:50.351 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:05:50.351 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:05:50.351 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:05:50.351 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:05:50.351 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:05:50.351 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:05:50.351 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:05:50.351 10:22:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:05:50.609 btrfs-progs v6.6.2 00:05:50.609 See https://btrfs.readthedocs.io for more information. 00:05:50.609 00:05:50.609 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:05:50.609 NOTE: several default settings have changed in version 5.15, please make sure 00:05:50.609 this does not affect your deployments: 00:05:50.609 - DUP for metadata (-m dup) 00:05:50.609 - enabled no-holes (-O no-holes) 00:05:50.609 - enabled free-space-tree (-R free-space-tree) 00:05:50.609 00:05:50.609 Label: (null) 00:05:50.609 UUID: 23d3609d-84ee-4f86-b90c-edd71a290a55 00:05:50.609 Node size: 16384 00:05:50.609 Sector size: 4096 00:05:50.609 Filesystem size: 510.00MiB 00:05:50.609 Block group profiles: 00:05:50.609 Data: single 8.00MiB 00:05:50.609 Metadata: DUP 32.00MiB 00:05:50.609 System: DUP 8.00MiB 00:05:50.609 SSD detected: yes 00:05:50.609 Zoned device: no 00:05:50.609 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:05:50.609 Runtime features: free-space-tree 00:05:50.609 Checksum: crc32c 00:05:50.609 Number of devices: 1 00:05:50.609 Devices: 00:05:50.609 ID SIZE PATH 00:05:50.609 1 510.00MiB /dev/nvme0n1p1 00:05:50.609 00:05:50.609 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:05:50.609 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1097845 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:50.867 00:05:50.867 real 0m0.630s 00:05:50.867 user 0m0.024s 00:05:50.867 sys 0m0.112s 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:05:50.867 ************************************ 00:05:50.867 END TEST filesystem_btrfs 00:05:50.867 ************************************ 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.867 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:51.125 ************************************ 00:05:51.125 START TEST filesystem_xfs 00:05:51.125 ************************************ 00:05:51.125 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:05:51.125 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:05:51.125 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:51.125 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:05:51.125 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:05:51.125 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:05:51.125 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:05:51.125 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:05:51.125 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:05:51.125 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:05:51.125 10:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:05:51.125 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:05:51.125 = sectsz=512 attr=2, projid32bit=1 00:05:51.125 = crc=1 finobt=1, sparse=1, rmapbt=0 00:05:51.125 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:05:51.125 data = bsize=4096 blocks=130560, imaxpct=25 00:05:51.125 = sunit=0 swidth=0 blks 00:05:51.125 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:05:51.125 log =internal log bsize=4096 blocks=16384, version=2 00:05:51.125 = sectsz=512 sunit=0 blks, lazy-count=1 00:05:51.125 realtime =none extsz=4096 blocks=0, rtextents=0 00:05:52.056 Discarding blocks...Done. 
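The ext4 and btrfs passes above, and the xfs pass whose mkfs output ends here, all exercise the exported namespace the same way once the filesystem exists: mount it, create and remove a file with syncs in between, unmount, and then confirm that the partition is still visible and that the target process is still alive, exactly as the mount/umount/kill -0 lines that follow show. A minimal sketch of one iteration, with the device name and pid from this run:

# one nvmf_filesystem_create iteration (fstype is ext4, btrfs or xfs)
fstype=xfs
dev=/dev/nvme0n1p1
nvmfpid=1097845                             # nvmf_tgt pid for the no-in-capsule pass

mkfs.$fstype -f "$dev"                      # the harness uses -F for ext4, -f otherwise
mount "$dev" /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device

kill -0 "$nvmfpid"                          # the target must still be running
lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still visible to the host
lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition table still intact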
00:05:52.056 10:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:05:52.056 10:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:54.580 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1097845 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:54.839 00:05:54.839 real 0m3.737s 00:05:54.839 user 0m0.022s 00:05:54.839 sys 0m0.052s 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:05:54.839 ************************************ 00:05:54.839 END TEST filesystem_xfs 00:05:54.839 ************************************ 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:05:54.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:54.839 10:22:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1097845 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1097845 ']' 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1097845 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1097845 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1097845' 00:05:54.839 killing process with pid 1097845 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1097845 00:05:54.839 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1097845 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:05:55.411 00:05:55.411 real 0m12.803s 00:05:55.411 user 0m49.379s 00:05:55.411 sys 0m1.779s 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:55.411 ************************************ 00:05:55.411 END TEST nvmf_filesystem_no_in_capsule 00:05:55.411 ************************************ 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:55.411 ************************************ 00:05:55.411 START TEST nvmf_filesystem_in_capsule 00:05:55.411 ************************************ 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1099548 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1099548 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1099548 ']' 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.411 10:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:55.670 [2024-07-15 10:22:43.972212] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:55.670 [2024-07-15 10:22:43.972300] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:55.670 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.670 [2024-07-15 10:22:44.034396] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.670 [2024-07-15 10:22:44.134331] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:55.670 [2024-07-15 10:22:44.134383] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
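The first pass then tears down in the usual order (disconnect the host, delete the subsystem, kill the target), and the in-capsule pass that starts above brings up a fresh nvmf_tgt and repeats the whole flow. The only functional difference between the two passes is the -c argument to nvmf_create_transport: 0 disables in-capsule data, while 4096 lets writes of up to 4 KiB ride inside the command capsule instead of being requested separately by the target. A sketch of the two invocations plus the intervening teardown; rpc_cmd is taken to be the harness wrapper around SPDK's scripts/rpc.py, which the trace itself does not show:

# end of the no-in-capsule pass
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"

# pass 1: no in-capsule data
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
# pass 2: allow up to 4096 bytes of in-capsule data
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096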
00:05:55.670 [2024-07-15 10:22:44.134410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:55.670 [2024-07-15 10:22:44.134421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:55.670 [2024-07-15 10:22:44.134436] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:55.670 [2024-07-15 10:22:44.134515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.670 [2024-07-15 10:22:44.134573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.670 [2024-07-15 10:22:44.134665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.670 [2024-07-15 10:22:44.134667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:55.928 [2024-07-15 10:22:44.270449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:55.928 Malloc1 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.928 10:22:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:55.928 [2024-07-15 10:22:44.450053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:05:55.928 { 00:05:55.928 "name": "Malloc1", 00:05:55.928 "aliases": [ 00:05:55.928 "96471bcb-3696-496d-97bf-cb8f22409754" 00:05:55.928 ], 00:05:55.928 "product_name": "Malloc disk", 00:05:55.928 "block_size": 512, 00:05:55.928 "num_blocks": 1048576, 00:05:55.928 "uuid": "96471bcb-3696-496d-97bf-cb8f22409754", 00:05:55.928 "assigned_rate_limits": { 00:05:55.928 "rw_ios_per_sec": 0, 00:05:55.928 "rw_mbytes_per_sec": 0, 00:05:55.928 "r_mbytes_per_sec": 0, 00:05:55.928 "w_mbytes_per_sec": 0 00:05:55.928 }, 00:05:55.928 "claimed": true, 00:05:55.928 "claim_type": "exclusive_write", 00:05:55.928 "zoned": false, 00:05:55.928 "supported_io_types": { 00:05:55.928 "read": true, 00:05:55.928 "write": true, 00:05:55.928 "unmap": true, 00:05:55.928 "flush": true, 00:05:55.928 "reset": true, 00:05:55.928 "nvme_admin": false, 00:05:55.928 "nvme_io": false, 00:05:55.928 "nvme_io_md": false, 00:05:55.928 "write_zeroes": true, 00:05:55.928 "zcopy": true, 00:05:55.928 "get_zone_info": false, 00:05:55.928 "zone_management": false, 00:05:55.928 
"zone_append": false, 00:05:55.928 "compare": false, 00:05:55.928 "compare_and_write": false, 00:05:55.928 "abort": true, 00:05:55.928 "seek_hole": false, 00:05:55.928 "seek_data": false, 00:05:55.928 "copy": true, 00:05:55.928 "nvme_iov_md": false 00:05:55.928 }, 00:05:55.928 "memory_domains": [ 00:05:55.928 { 00:05:55.928 "dma_device_id": "system", 00:05:55.928 "dma_device_type": 1 00:05:55.928 }, 00:05:55.928 { 00:05:55.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.928 "dma_device_type": 2 00:05:55.928 } 00:05:55.928 ], 00:05:55.928 "driver_specific": {} 00:05:55.928 } 00:05:55.928 ]' 00:05:55.928 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:05:56.186 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:05:56.186 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:05:56.186 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:05:56.186 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:05:56.186 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:05:56.186 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:56.186 10:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:56.750 10:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:56.750 10:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:05:56.750 10:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:05:56.750 10:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:05:56.750 10:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:05:58.641 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:05:59.203 10:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:05:59.767 10:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.137 ************************************ 00:06:01.137 START TEST filesystem_in_capsule_ext4 00:06:01.137 ************************************ 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:01.137 10:22:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:01.137 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:01.137 mke2fs 1.46.5 (30-Dec-2021) 00:06:01.137 Discarding device blocks: 0/522240 done 00:06:01.137 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:01.137 Filesystem UUID: 3af9157c-ad61-412c-b388-dcea8035f7a7 00:06:01.137 Superblock backups stored on blocks: 00:06:01.137 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:01.137 00:06:01.137 Allocating group tables: 0/64 done 00:06:01.137 Writing inode tables: 0/64 done 00:06:01.137 Creating journal (8192 blocks): done 00:06:01.138 Writing superblocks and filesystem accounting information: 0/64 done 00:06:01.138 00:06:01.138 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:01.138 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:01.138 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:01.138 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:01.138 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:01.138 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:01.138 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:01.138 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1099548 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:01.396 00:06:01.396 real 0m0.425s 00:06:01.396 user 0m0.021s 00:06:01.396 sys 0m0.057s 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:01.396 ************************************ 00:06:01.396 END TEST filesystem_in_capsule_ext4 00:06:01.396 ************************************ 00:06:01.396 
10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.396 ************************************ 00:06:01.396 START TEST filesystem_in_capsule_btrfs 00:06:01.396 ************************************ 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:01.396 10:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:01.655 btrfs-progs v6.6.2 00:06:01.655 See https://btrfs.readthedocs.io for more information. 00:06:01.655 00:06:01.655 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:01.655 NOTE: several default settings have changed in version 5.15, please make sure 00:06:01.655 this does not affect your deployments: 00:06:01.655 - DUP for metadata (-m dup) 00:06:01.655 - enabled no-holes (-O no-holes) 00:06:01.655 - enabled free-space-tree (-R free-space-tree) 00:06:01.655 00:06:01.655 Label: (null) 00:06:01.655 UUID: 1e993cf0-a954-4c89-8424-f089cf21f74b 00:06:01.655 Node size: 16384 00:06:01.655 Sector size: 4096 00:06:01.655 Filesystem size: 510.00MiB 00:06:01.655 Block group profiles: 00:06:01.655 Data: single 8.00MiB 00:06:01.655 Metadata: DUP 32.00MiB 00:06:01.655 System: DUP 8.00MiB 00:06:01.655 SSD detected: yes 00:06:01.655 Zoned device: no 00:06:01.655 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:01.655 Runtime features: free-space-tree 00:06:01.655 Checksum: crc32c 00:06:01.655 Number of devices: 1 00:06:01.655 Devices: 00:06:01.655 ID SIZE PATH 00:06:01.655 1 510.00MiB /dev/nvme0n1p1 00:06:01.655 00:06:01.655 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:01.655 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:01.655 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:01.655 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:01.655 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:01.655 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:01.655 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:01.655 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1099548 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:01.913 00:06:01.913 real 0m0.450s 00:06:01.913 user 0m0.016s 00:06:01.913 sys 0m0.113s 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:01.913 ************************************ 00:06:01.913 END TEST filesystem_in_capsule_btrfs 00:06:01.913 ************************************ 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.913 ************************************ 00:06:01.913 START TEST filesystem_in_capsule_xfs 00:06:01.913 ************************************ 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:01.913 10:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:01.913 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:01.913 = sectsz=512 attr=2, projid32bit=1 00:06:01.913 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:01.913 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:01.913 data = bsize=4096 blocks=130560, imaxpct=25 00:06:01.913 = sunit=0 swidth=0 blks 00:06:01.913 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:01.913 log =internal log bsize=4096 blocks=16384, version=2 00:06:01.913 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:01.913 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:02.844 Discarding blocks...Done. 
00:06:02.844 10:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:02.844 10:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1099548 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:04.742 00:06:04.742 real 0m2.569s 00:06:04.742 user 0m0.020s 00:06:04.742 sys 0m0.059s 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:04.742 ************************************ 00:06:04.742 END TEST filesystem_in_capsule_xfs 00:06:04.742 ************************************ 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:04.742 10:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:04.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:04.742 10:22:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1099548 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1099548 ']' 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1099548 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1099548 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1099548' 00:06:04.742 killing process with pid 1099548 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1099548 00:06:04.742 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1099548 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:05.308 00:06:05.308 real 0m9.648s 00:06:05.308 user 0m36.793s 00:06:05.308 sys 0m1.597s 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:05.308 ************************************ 00:06:05.308 END TEST nvmf_filesystem_in_capsule 00:06:05.308 ************************************ 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:05.308 rmmod nvme_tcp 00:06:05.308 rmmod nvme_fabrics 00:06:05.308 rmmod nvme_keyring 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:05.308 10:22:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.211 10:22:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:07.211 00:06:07.211 real 0m27.104s 00:06:07.211 user 1m27.145s 00:06:07.211 sys 0m5.061s 00:06:07.211 10:22:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.211 10:22:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.211 ************************************ 00:06:07.211 END TEST nvmf_filesystem 00:06:07.211 ************************************ 00:06:07.211 10:22:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:07.211 10:22:55 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:07.211 10:22:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:07.211 10:22:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.211 10:22:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.211 ************************************ 00:06:07.211 START TEST nvmf_target_discovery 00:06:07.211 ************************************ 00:06:07.211 10:22:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:07.470 * Looking for test storage... 
00:06:07.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:07.470 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:07.471 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:07.471 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:07.471 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:07.471 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:07.471 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.471 10:22:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:07.471 10:22:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.471 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:07.471 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:07.471 10:22:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:07.471 10:22:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:09.374 10:22:57 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:09.374 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:09.374 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:09.374 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:09.634 Found net devices under 0000:09:00.0: cvl_0_0 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:09.634 Found net devices under 0000:09:00.1: cvl_0_1 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:09.634 10:22:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:09.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:09.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:06:09.634 00:06:09.634 --- 10.0.0.2 ping statistics --- 00:06:09.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:09.634 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:09.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:09.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:06:09.634 00:06:09.634 --- 10.0.0.1 ping statistics --- 00:06:09.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:09.634 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:09.634 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1102829 00:06:09.635 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:09.635 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1102829 00:06:09.635 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1102829 ']' 00:06:09.635 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.635 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.635 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:09.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.635 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.635 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:09.635 [2024-07-15 10:22:58.141925] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:09.635 [2024-07-15 10:22:58.142022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:09.635 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.893 [2024-07-15 10:22:58.208147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:09.893 [2024-07-15 10:22:58.318631] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:09.893 [2024-07-15 10:22:58.318684] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:09.893 [2024-07-15 10:22:58.318697] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:09.893 [2024-07-15 10:22:58.318708] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:09.893 [2024-07-15 10:22:58.318717] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:09.893 [2024-07-15 10:22:58.318799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.893 [2024-07-15 10:22:58.318877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.893 [2024-07-15 10:22:58.318949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.893 [2024-07-15 10:22:58.318952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.893 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.893 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:09.893 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:09.893 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.893 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 [2024-07-15 10:22:58.467606] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 Null1 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 [2024-07-15 10:22:58.507898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 Null2 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:10.152 10:22:58 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 Null3 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 Null4 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.152 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.153 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:10.153 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.153 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.153 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.153 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:10.153 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.153 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.153 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.153 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:06:10.411 00:06:10.411 Discovery Log Number of Records 6, Generation counter 6 00:06:10.411 =====Discovery Log Entry 0====== 00:06:10.411 trtype: tcp 00:06:10.411 adrfam: ipv4 00:06:10.411 subtype: current discovery subsystem 00:06:10.411 treq: not required 00:06:10.411 portid: 0 00:06:10.411 trsvcid: 4420 00:06:10.411 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:10.411 traddr: 10.0.0.2 00:06:10.411 eflags: explicit discovery connections, duplicate discovery information 00:06:10.411 sectype: none 00:06:10.411 =====Discovery Log Entry 1====== 00:06:10.411 trtype: tcp 00:06:10.411 adrfam: ipv4 00:06:10.411 subtype: nvme subsystem 00:06:10.411 treq: not required 00:06:10.411 portid: 0 00:06:10.411 trsvcid: 4420 00:06:10.411 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:10.411 traddr: 10.0.0.2 00:06:10.411 eflags: none 00:06:10.411 sectype: none 00:06:10.411 =====Discovery Log Entry 2====== 00:06:10.411 trtype: tcp 00:06:10.411 adrfam: ipv4 00:06:10.411 subtype: nvme subsystem 00:06:10.411 treq: not required 00:06:10.411 portid: 0 00:06:10.411 trsvcid: 4420 00:06:10.411 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:10.411 traddr: 10.0.0.2 00:06:10.411 eflags: none 00:06:10.411 sectype: none 00:06:10.411 =====Discovery Log Entry 3====== 00:06:10.411 trtype: tcp 00:06:10.411 adrfam: ipv4 00:06:10.411 subtype: nvme subsystem 00:06:10.411 treq: not required 00:06:10.411 portid: 0 00:06:10.411 trsvcid: 4420 00:06:10.411 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:10.411 traddr: 10.0.0.2 00:06:10.411 eflags: none 00:06:10.411 sectype: none 00:06:10.411 =====Discovery Log Entry 4====== 00:06:10.411 trtype: tcp 00:06:10.411 adrfam: ipv4 00:06:10.411 subtype: nvme subsystem 00:06:10.411 treq: not required 
00:06:10.411 portid: 0 00:06:10.411 trsvcid: 4420 00:06:10.411 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:10.411 traddr: 10.0.0.2 00:06:10.411 eflags: none 00:06:10.411 sectype: none 00:06:10.411 =====Discovery Log Entry 5====== 00:06:10.411 trtype: tcp 00:06:10.411 adrfam: ipv4 00:06:10.411 subtype: discovery subsystem referral 00:06:10.411 treq: not required 00:06:10.411 portid: 0 00:06:10.411 trsvcid: 4430 00:06:10.411 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:10.411 traddr: 10.0.0.2 00:06:10.411 eflags: none 00:06:10.411 sectype: none 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:10.411 Perform nvmf subsystem discovery via RPC 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.411 [ 00:06:10.411 { 00:06:10.411 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:10.411 "subtype": "Discovery", 00:06:10.411 "listen_addresses": [ 00:06:10.411 { 00:06:10.411 "trtype": "TCP", 00:06:10.411 "adrfam": "IPv4", 00:06:10.411 "traddr": "10.0.0.2", 00:06:10.411 "trsvcid": "4420" 00:06:10.411 } 00:06:10.411 ], 00:06:10.411 "allow_any_host": true, 00:06:10.411 "hosts": [] 00:06:10.411 }, 00:06:10.411 { 00:06:10.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:10.411 "subtype": "NVMe", 00:06:10.411 "listen_addresses": [ 00:06:10.411 { 00:06:10.411 "trtype": "TCP", 00:06:10.411 "adrfam": "IPv4", 00:06:10.411 "traddr": "10.0.0.2", 00:06:10.411 "trsvcid": "4420" 00:06:10.411 } 00:06:10.411 ], 00:06:10.411 "allow_any_host": true, 00:06:10.411 "hosts": [], 00:06:10.411 "serial_number": "SPDK00000000000001", 00:06:10.411 "model_number": "SPDK bdev Controller", 00:06:10.411 "max_namespaces": 32, 00:06:10.411 "min_cntlid": 1, 00:06:10.411 "max_cntlid": 65519, 00:06:10.411 "namespaces": [ 00:06:10.411 { 00:06:10.411 "nsid": 1, 00:06:10.411 "bdev_name": "Null1", 00:06:10.411 "name": "Null1", 00:06:10.411 "nguid": "EA1449E142B94B57A383065BC56C394C", 00:06:10.411 "uuid": "ea1449e1-42b9-4b57-a383-065bc56c394c" 00:06:10.411 } 00:06:10.411 ] 00:06:10.411 }, 00:06:10.411 { 00:06:10.411 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:10.411 "subtype": "NVMe", 00:06:10.411 "listen_addresses": [ 00:06:10.411 { 00:06:10.411 "trtype": "TCP", 00:06:10.411 "adrfam": "IPv4", 00:06:10.411 "traddr": "10.0.0.2", 00:06:10.411 "trsvcid": "4420" 00:06:10.411 } 00:06:10.411 ], 00:06:10.411 "allow_any_host": true, 00:06:10.411 "hosts": [], 00:06:10.411 "serial_number": "SPDK00000000000002", 00:06:10.411 "model_number": "SPDK bdev Controller", 00:06:10.411 "max_namespaces": 32, 00:06:10.411 "min_cntlid": 1, 00:06:10.411 "max_cntlid": 65519, 00:06:10.411 "namespaces": [ 00:06:10.411 { 00:06:10.411 "nsid": 1, 00:06:10.411 "bdev_name": "Null2", 00:06:10.411 "name": "Null2", 00:06:10.411 "nguid": "15B38D83501541239EA6324E86BE2329", 00:06:10.411 "uuid": "15b38d83-5015-4123-9ea6-324e86be2329" 00:06:10.411 } 00:06:10.411 ] 00:06:10.411 }, 00:06:10.411 { 00:06:10.411 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:10.411 "subtype": "NVMe", 00:06:10.411 "listen_addresses": [ 00:06:10.411 { 00:06:10.411 "trtype": "TCP", 00:06:10.411 "adrfam": "IPv4", 00:06:10.411 "traddr": "10.0.0.2", 00:06:10.411 "trsvcid": "4420" 00:06:10.411 } 00:06:10.411 ], 00:06:10.411 "allow_any_host": true, 
00:06:10.411 "hosts": [], 00:06:10.411 "serial_number": "SPDK00000000000003", 00:06:10.411 "model_number": "SPDK bdev Controller", 00:06:10.411 "max_namespaces": 32, 00:06:10.411 "min_cntlid": 1, 00:06:10.411 "max_cntlid": 65519, 00:06:10.411 "namespaces": [ 00:06:10.411 { 00:06:10.411 "nsid": 1, 00:06:10.411 "bdev_name": "Null3", 00:06:10.411 "name": "Null3", 00:06:10.411 "nguid": "BA9A528E731247CA9A7DD8B2F42FF386", 00:06:10.411 "uuid": "ba9a528e-7312-47ca-9a7d-d8b2f42ff386" 00:06:10.411 } 00:06:10.411 ] 00:06:10.411 }, 00:06:10.411 { 00:06:10.411 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:10.411 "subtype": "NVMe", 00:06:10.411 "listen_addresses": [ 00:06:10.411 { 00:06:10.411 "trtype": "TCP", 00:06:10.411 "adrfam": "IPv4", 00:06:10.411 "traddr": "10.0.0.2", 00:06:10.411 "trsvcid": "4420" 00:06:10.411 } 00:06:10.411 ], 00:06:10.411 "allow_any_host": true, 00:06:10.411 "hosts": [], 00:06:10.411 "serial_number": "SPDK00000000000004", 00:06:10.411 "model_number": "SPDK bdev Controller", 00:06:10.411 "max_namespaces": 32, 00:06:10.411 "min_cntlid": 1, 00:06:10.411 "max_cntlid": 65519, 00:06:10.411 "namespaces": [ 00:06:10.411 { 00:06:10.411 "nsid": 1, 00:06:10.411 "bdev_name": "Null4", 00:06:10.411 "name": "Null4", 00:06:10.411 "nguid": "720BE0097DD045848B6E14703D4E77A3", 00:06:10.411 "uuid": "720be009-7dd0-4584-8b6e-14703d4e77a3" 00:06:10.411 } 00:06:10.411 ] 00:06:10.411 } 00:06:10.411 ] 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:10.411 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:10.412 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:10.412 rmmod nvme_tcp 00:06:10.412 rmmod nvme_fabrics 00:06:10.670 rmmod nvme_keyring 00:06:10.670 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:10.670 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:10.670 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:10.670 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1102829 ']' 00:06:10.670 10:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1102829 00:06:10.670 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1102829 ']' 00:06:10.670 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1102829 00:06:10.670 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:10.670 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.670 10:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1102829 00:06:10.670 10:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.670 10:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.670 10:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1102829' 00:06:10.670 killing process with pid 1102829 00:06:10.670 10:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1102829 00:06:10.670 10:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1102829 00:06:10.929 10:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:10.929 10:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:10.929 10:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:10.929 10:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:10.929 10:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:10.929 10:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.929 10:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:10.929 10:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:12.831 10:23:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:12.831 00:06:12.831 real 0m5.572s 00:06:12.831 user 0m4.553s 00:06:12.831 sys 0m1.906s 00:06:12.831 10:23:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.831 10:23:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:12.831 ************************************ 00:06:12.831 END TEST nvmf_target_discovery 00:06:12.831 ************************************ 00:06:12.831 10:23:01 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:06:12.831 10:23:01 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:12.831 10:23:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:12.831 10:23:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.831 10:23:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.831 ************************************ 00:06:12.831 START TEST nvmf_referrals 00:06:12.831 ************************************ 00:06:12.831 10:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:13.089 * Looking for test storage... 00:06:13.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
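Together with the 4430 referral port set just below, the three addresses above (127.0.0.2 through 127.0.0.4) drive the referral checks later in this trace: each is registered on the discovery subsystem over RPC and is then expected to appear both in nvmf_discovery_get_referrals and in the nvme discover log page served on port 8009. Condensed, that round trip looks roughly like this (a minimal sketch assuming SPDK's stock scripts/rpc.py client in place of the rpc_cmd helper; the jq filters are the same ones the get_referral_ips helper uses below):

    # register three referrals on the discovery subsystem
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # target-side check: the RPC view of the referrals
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # initiator-side check: the same addresses from the discovery log page
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # cleanup
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done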
00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:13.090 10:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:15.048 10:23:03 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:15.048 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:15.049 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:15.049 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:15.049 10:23:03 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:15.049 Found net devices under 0000:09:00.0: cvl_0_0 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:15.049 Found net devices under 0000:09:00.1: cvl_0_1 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:15.049 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:15.307 10:23:03 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:15.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:15.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:06:15.307 00:06:15.307 --- 10.0.0.2 ping statistics --- 00:06:15.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.307 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:15.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:15.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:06:15.307 00:06:15.307 --- 10.0.0.1 ping statistics --- 00:06:15.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.307 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1104855 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1104855 00:06:15.307 10:23:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1104855 ']' 00:06:15.308 10:23:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.308 10:23:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.308 10:23:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:15.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.308 10:23:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.308 10:23:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:15.308 [2024-07-15 10:23:03.766620] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:15.308 [2024-07-15 10:23:03.766700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.308 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.308 [2024-07-15 10:23:03.831437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.565 [2024-07-15 10:23:03.941162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:15.565 [2024-07-15 10:23:03.941214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:15.565 [2024-07-15 10:23:03.941227] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.565 [2024-07-15 10:23:03.941238] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.565 [2024-07-15 10:23:03.941247] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:15.565 [2024-07-15 10:23:03.941306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.565 [2024-07-15 10:23:03.941374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.565 [2024-07-15 10:23:03.941496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.565 [2024-07-15 10:23:03.941498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.565 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.565 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:15.565 10:23:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:15.565 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:15.565 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:15.565 10:23:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.565 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:15.565 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.565 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:15.565 [2024-07-15 10:23:04.098447] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.565 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.565 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:15.565 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.565 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:15.565 [2024-07-15 10:23:04.110684] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:15.822 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:15.823 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.080 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:16.081 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.081 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:16.081 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.081 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:16.081 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:16.081 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:16.081 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.081 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:16.081 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:16.081 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:16.338 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:16.595 10:23:04 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:16.595 10:23:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:16.595 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.595 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:16.595 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:16.595 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:16.595 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:16.595 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:16.595 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:16.595 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:16.595 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:16.852 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:16.852 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:16.852 10:23:05 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:16.852 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:16.852 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:16.852 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:16.852 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:16.852 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:16.852 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:16.852 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:16.852 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:16.852 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:16.852 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:17.109 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:17.110 
10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:17.110 rmmod nvme_tcp 00:06:17.110 rmmod nvme_fabrics 00:06:17.110 rmmod nvme_keyring 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1104855 ']' 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1104855 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1104855 ']' 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1104855 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.110 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1104855 00:06:17.368 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.368 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.368 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1104855' 00:06:17.368 killing process with pid 1104855 00:06:17.368 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1104855 00:06:17.368 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1104855 00:06:17.368 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:17.369 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:17.369 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:17.369 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:17.369 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:17.369 10:23:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.369 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:17.369 10:23:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.902 10:23:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:19.903 00:06:19.903 real 0m6.588s 00:06:19.903 user 0m9.229s 00:06:19.903 sys 0m2.160s 00:06:19.903 10:23:07 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.903 10:23:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:19.903 ************************************ 00:06:19.903 END TEST nvmf_referrals 00:06:19.903 ************************************ 00:06:19.903 10:23:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:19.903 10:23:07 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:19.903 10:23:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:19.903 10:23:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.903 10:23:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.903 ************************************ 00:06:19.903 START TEST nvmf_connect_disconnect 00:06:19.903 ************************************ 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:19.903 * Looking for test storage... 00:06:19.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.903 10:23:08 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:19.903 10:23:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:21.806 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:21.807 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:21.807 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:21.807 10:23:10 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:21.807 Found net devices under 0000:09:00.0: cvl_0_0 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:21.807 Found net devices under 0000:09:00.1: cvl_0_1 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:21.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:21.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:06:21.807 00:06:21.807 --- 10.0.0.2 ping statistics --- 00:06:21.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.807 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:21.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:21.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:06:21.807 00:06:21.807 --- 10.0.0.1 ping statistics --- 00:06:21.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.807 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1107146 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1107146 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1107146 ']' 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.807 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:21.807 [2024-07-15 10:23:10.293868] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
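For reference, the nvmf/common.sh setup traced above (nvmf_tcp_init followed by nvmfappstart) boils down to the short sequence below: one E810 port is moved into a private network namespace for the target while the other stays in the root namespace for the initiator, and the SPDK target is then launched inside that namespace. This is a minimal sketch assuming the interface names (cvl_0_0, cvl_0_1), the 10.0.0.0/24 addresses and the core/trace masks seen in this particular run; SPDK_DIR stands in for the job's spdk checkout path.

  NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0                    # port handed to the target namespace
  INI_IF=cvl_0_1                    # port kept in the root namespace for the initiator
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  # let NVMe/TCP traffic reach the listener port used by these tests
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                         # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns -> root ns
  modprobe nvme-tcp
  # start the SPDK target inside the namespace, as the trace does via nvmfappstart
  ip netns exec "$NS" "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &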
00:06:21.807 [2024-07-15 10:23:10.293942] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.807 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.065 [2024-07-15 10:23:10.358315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.065 [2024-07-15 10:23:10.469558] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:22.065 [2024-07-15 10:23:10.469623] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:22.065 [2024-07-15 10:23:10.469637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.065 [2024-07-15 10:23:10.469648] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.065 [2024-07-15 10:23:10.469657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:22.065 [2024-07-15 10:23:10.469738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.065 [2024-07-15 10:23:10.469843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.065 [2024-07-15 10:23:10.469882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.065 [2024-07-15 10:23:10.469886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.065 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.065 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:22.065 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:22.065 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:22.065 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:22.065 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.065 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:22.065 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.065 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:22.065 [2024-07-15 10:23:10.612487] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:22.323 10:23:10 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:22.323 [2024-07-15 10:23:10.663564] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:22.323 10:23:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:24.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:28.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:30.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:33.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:35.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:35.705 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:35.705 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:35.705 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:35.705 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:06:35.705 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:35.705 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:06:35.705 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:35.705 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:35.705 rmmod nvme_tcp 00:06:35.705 rmmod nvme_fabrics 00:06:35.963 rmmod nvme_keyring 00:06:35.963 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:35.963 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:06:35.963 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:06:35.963 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1107146 ']' 00:06:35.963 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1107146 00:06:35.963 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 1107146 ']' 00:06:35.963 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1107146 00:06:35.963 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:06:35.963 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.963 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1107146 00:06:35.963 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.964 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.964 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1107146' 00:06:35.964 killing process with pid 1107146 00:06:35.964 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1107146 00:06:35.964 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1107146 00:06:36.221 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:36.221 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:36.221 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:36.221 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:36.221 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:36.221 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.222 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:36.222 10:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.129 10:23:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:38.129 00:06:38.129 real 0m18.624s 00:06:38.129 user 0m55.858s 00:06:38.129 sys 0m3.205s 00:06:38.129 10:23:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.129 10:23:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:38.129 ************************************ 00:06:38.129 END TEST nvmf_connect_disconnect 00:06:38.129 ************************************ 00:06:38.129 10:23:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:38.129 10:23:26 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:38.129 10:23:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:38.129 10:23:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.129 10:23:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.388 ************************************ 00:06:38.388 START TEST nvmf_multitarget 00:06:38.388 ************************************ 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:38.388 * Looking for test storage... 
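Stripped of the tracing, the nvmf_connect_disconnect case that finishes above is a handful of RPCs against the running target followed by a connect/disconnect loop. A rough equivalent is sketched below; rpc.py stands for scripts/rpc.py from the SPDK tree (what rpc_cmd wraps), and the loop body is an approximation (the excerpt only shows the per-iteration "disconnected 1 controller(s)" messages, and the real script also passes the generated --hostnqn/--hostid pair).

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512                                  # creates Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  for i in $(seq 1 5); do                                           # num_iterations=5 in this run
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1                 # "... disconnected 1 controller(s)"
  done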
00:06:38.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.388 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:06:38.389 10:23:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:40.291 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:40.291 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:40.291 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:40.548 Found net devices under 0000:09:00.0: cvl_0_0 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:40.548 Found net devices under 0000:09:00.1: cvl_0_1 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:40.548 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:40.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:40.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:06:40.549 00:06:40.549 --- 10.0.0.2 ping statistics --- 00:06:40.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.549 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:40.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:40.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:06:40.549 00:06:40.549 --- 10.0.0.1 ping statistics --- 00:06:40.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.549 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:40.549 10:23:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:40.549 10:23:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1110902 00:06:40.549 10:23:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:40.549 10:23:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1110902 00:06:40.549 10:23:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1110902 ']' 00:06:40.549 10:23:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.549 10:23:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.549 10:23:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.549 10:23:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.549 10:23:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:40.549 [2024-07-15 10:23:29.051709] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:40.549 [2024-07-15 10:23:29.051821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.549 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.806 [2024-07-15 10:23:29.116164] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.806 [2024-07-15 10:23:29.216918] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:40.806 [2024-07-15 10:23:29.216971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:40.806 [2024-07-15 10:23:29.216995] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:40.806 [2024-07-15 10:23:29.217005] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:40.806 [2024-07-15 10:23:29.217016] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:40.806 [2024-07-15 10:23:29.217079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.806 [2024-07-15 10:23:29.217155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.806 [2024-07-15 10:23:29.217228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.806 [2024-07-15 10:23:29.217230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.806 10:23:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.806 10:23:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:06:40.806 10:23:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:40.806 10:23:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:40.806 10:23:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:41.063 10:23:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:41.063 10:23:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:06:41.063 10:23:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:41.063 10:23:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:06:41.063 10:23:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:06:41.063 10:23:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:06:41.063 "nvmf_tgt_1" 00:06:41.063 10:23:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:06:41.319 "nvmf_tgt_2" 00:06:41.319 10:23:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:41.319 10:23:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:06:41.319 10:23:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:06:41.319 10:23:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:06:41.577 true 00:06:41.577 10:23:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:06:41.577 true 00:06:41.577 10:23:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:41.577 10:23:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:41.835 rmmod nvme_tcp 00:06:41.835 rmmod nvme_fabrics 00:06:41.835 rmmod nvme_keyring 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1110902 ']' 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1110902 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1110902 ']' 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1110902 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1110902 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1110902' 00:06:41.835 killing process with pid 1110902 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1110902 00:06:41.835 10:23:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1110902 00:06:42.093 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:42.093 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:42.093 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:42.093 10:23:30 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:42.093 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:42.093 10:23:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.093 10:23:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.093 10:23:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.999 10:23:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:43.999 00:06:43.999 real 0m5.851s 00:06:43.999 user 0m6.438s 00:06:43.999 sys 0m1.979s 00:06:43.999 10:23:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.999 10:23:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:43.999 ************************************ 00:06:43.999 END TEST nvmf_multitarget 00:06:43.999 ************************************ 00:06:44.257 10:23:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:44.257 10:23:32 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:44.257 10:23:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:44.257 10:23:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.257 10:23:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.257 ************************************ 00:06:44.257 START TEST nvmf_rpc 00:06:44.257 ************************************ 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:44.257 * Looking for test storage... 
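For reference, the multitarget flow traced above reduces to a handful of RPC calls against the running nvmf_tgt application. A minimal sketch, assuming the target is still listening on the default /var/tmp/spdk.sock socket and using the same helper script the test invokes:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$RPC nvmf_create_target -n nvmf_tgt_1 -s 32   # add a second target (-n name, -s as passed by multitarget.sh)
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32   # add a third
$RPC nvmf_get_targets | jq length             # expect 3: the default target plus the two created above
$RPC nvmf_delete_target -n nvmf_tgt_1         # tear both down again
$RPC nvmf_delete_target -n nvmf_tgt_2
$RPC nvmf_get_targets | jq length             # expect 1, only the default target remains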
00:06:44.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:06:44.257 10:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
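The PCI scan that follows builds ID tables for Intel E810/X722 and Mellanox parts and then settles on the two ice-bound E810 ports of this rig. The same lookup can be approximated by hand; a rough sketch, assuming lspci is available on the node (8086:159b is the ID the scan matches below, and the sysfs path is the generic form of what it walks):

lspci -Dnn -d 8086:159b                      # E810 ports by vendor:device ID; this run reports 0000:09:00.0 and 0000:09:00.1
lspci -Dnn -d 8086:1592                      # the other E810 variant the script also registers
ls /sys/bus/pci/devices/0000:09:00.0/net/    # netdev bound to the first port (cvl_0_0 in this run)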
00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:46.815 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:46.815 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.815 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:46.815 Found net devices under 0000:09:00.0: cvl_0_0 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:46.816 Found net devices under 0000:09:00.1: cvl_0_1 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:46.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:46.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:06:46.816 00:06:46.816 --- 10.0.0.2 ping statistics --- 00:06:46.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.816 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:46.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:46.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:06:46.816 00:06:46.816 --- 10.0.0.1 ping statistics --- 00:06:46.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.816 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1113007 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1113007 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1113007 ']' 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.816 10:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.816 [2024-07-15 10:23:34.958517] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:46.816 [2024-07-15 10:23:34.958596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.816 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.816 [2024-07-15 10:23:35.032411] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.816 [2024-07-15 10:23:35.142662] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.816 [2024-07-15 10:23:35.142715] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:46.816 [2024-07-15 10:23:35.142730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.816 [2024-07-15 10:23:35.142740] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.816 [2024-07-15 10:23:35.142750] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:46.816 [2024-07-15 10:23:35.142798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.816 [2024-07-15 10:23:35.142841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.816 [2024-07-15 10:23:35.142921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.816 [2024-07-15 10:23:35.142924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.749 10:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.749 10:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:47.749 10:23:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:47.749 10:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:47.749 10:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.749 10:23:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:47.749 10:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:06:47.749 10:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.749 10:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.749 10:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.749 10:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:06:47.749 "tick_rate": 2700000000, 00:06:47.749 "poll_groups": [ 00:06:47.749 { 00:06:47.749 "name": "nvmf_tgt_poll_group_000", 00:06:47.749 "admin_qpairs": 0, 00:06:47.749 "io_qpairs": 0, 00:06:47.749 "current_admin_qpairs": 0, 00:06:47.749 "current_io_qpairs": 0, 00:06:47.749 "pending_bdev_io": 0, 00:06:47.749 "completed_nvme_io": 0, 00:06:47.749 "transports": [] 00:06:47.749 }, 00:06:47.749 { 00:06:47.749 "name": "nvmf_tgt_poll_group_001", 00:06:47.749 "admin_qpairs": 0, 00:06:47.749 "io_qpairs": 0, 00:06:47.749 "current_admin_qpairs": 0, 00:06:47.749 "current_io_qpairs": 0, 00:06:47.749 "pending_bdev_io": 0, 00:06:47.749 "completed_nvme_io": 0, 00:06:47.749 "transports": [] 00:06:47.749 }, 00:06:47.749 { 00:06:47.749 "name": "nvmf_tgt_poll_group_002", 00:06:47.749 "admin_qpairs": 0, 00:06:47.749 "io_qpairs": 0, 00:06:47.749 "current_admin_qpairs": 0, 00:06:47.749 "current_io_qpairs": 0, 00:06:47.749 "pending_bdev_io": 0, 00:06:47.749 "completed_nvme_io": 0, 00:06:47.749 "transports": [] 00:06:47.750 }, 00:06:47.750 { 00:06:47.750 "name": "nvmf_tgt_poll_group_003", 00:06:47.750 "admin_qpairs": 0, 00:06:47.750 "io_qpairs": 0, 00:06:47.750 "current_admin_qpairs": 0, 00:06:47.750 "current_io_qpairs": 0, 00:06:47.750 "pending_bdev_io": 0, 00:06:47.750 "completed_nvme_io": 0, 00:06:47.750 "transports": [] 00:06:47.750 } 00:06:47.750 ] 00:06:47.750 }' 00:06:47.750 10:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:06:47.750 10:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:06:47.750 10:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:06:47.750 10:23:35 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.750 [2024-07-15 10:23:36.042179] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:06:47.750 "tick_rate": 2700000000, 00:06:47.750 "poll_groups": [ 00:06:47.750 { 00:06:47.750 "name": "nvmf_tgt_poll_group_000", 00:06:47.750 "admin_qpairs": 0, 00:06:47.750 "io_qpairs": 0, 00:06:47.750 "current_admin_qpairs": 0, 00:06:47.750 "current_io_qpairs": 0, 00:06:47.750 "pending_bdev_io": 0, 00:06:47.750 "completed_nvme_io": 0, 00:06:47.750 "transports": [ 00:06:47.750 { 00:06:47.750 "trtype": "TCP" 00:06:47.750 } 00:06:47.750 ] 00:06:47.750 }, 00:06:47.750 { 00:06:47.750 "name": "nvmf_tgt_poll_group_001", 00:06:47.750 "admin_qpairs": 0, 00:06:47.750 "io_qpairs": 0, 00:06:47.750 "current_admin_qpairs": 0, 00:06:47.750 "current_io_qpairs": 0, 00:06:47.750 "pending_bdev_io": 0, 00:06:47.750 "completed_nvme_io": 0, 00:06:47.750 "transports": [ 00:06:47.750 { 00:06:47.750 "trtype": "TCP" 00:06:47.750 } 00:06:47.750 ] 00:06:47.750 }, 00:06:47.750 { 00:06:47.750 "name": "nvmf_tgt_poll_group_002", 00:06:47.750 "admin_qpairs": 0, 00:06:47.750 "io_qpairs": 0, 00:06:47.750 "current_admin_qpairs": 0, 00:06:47.750 "current_io_qpairs": 0, 00:06:47.750 "pending_bdev_io": 0, 00:06:47.750 "completed_nvme_io": 0, 00:06:47.750 "transports": [ 00:06:47.750 { 00:06:47.750 "trtype": "TCP" 00:06:47.750 } 00:06:47.750 ] 00:06:47.750 }, 00:06:47.750 { 00:06:47.750 "name": "nvmf_tgt_poll_group_003", 00:06:47.750 "admin_qpairs": 0, 00:06:47.750 "io_qpairs": 0, 00:06:47.750 "current_admin_qpairs": 0, 00:06:47.750 "current_io_qpairs": 0, 00:06:47.750 "pending_bdev_io": 0, 00:06:47.750 "completed_nvme_io": 0, 00:06:47.750 "transports": [ 00:06:47.750 { 00:06:47.750 "trtype": "TCP" 00:06:47.750 } 00:06:47.750 ] 00:06:47.750 } 00:06:47.750 ] 00:06:47.750 }' 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
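At this point rpc.sh has created the TCP transport and is cross-checking nvmf_get_stats output with jq and awk. Stripped of the test plumbing, the same sequence against a running target looks roughly like the following, assuming the stock scripts/rpc.py client from the checked-out spdk tree and the default /var/tmp/spdk.sock socket (the transport flags are simply the ones rpc.sh passes above):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                                        # same flags rpc.sh uses
$RPC nvmf_get_stats | jq '.poll_groups[].name' | wc -l                              # one poll group per core, 4 in this run
$RPC nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'  # expect 0, nothing connected yet
$RPC nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1}END{print s}'     # expect 0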
00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.750 Malloc1 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.750 [2024-07-15 10:23:36.199238] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:06:47.750 [2024-07-15 10:23:36.221642] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:06:47.750 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:47.750 could not add new controller: failed to write to nvme-fabrics device 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.750 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:48.689 10:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:06:48.689 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:06:48.689 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:48.689 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:48.689 10:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:50.586 10:23:38 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:50.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.586 10:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:50.586 [2024-07-15 10:23:39.021754] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:06:50.586 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:50.586 could not add new controller: failed to write to nvme-fabrics device 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.586 10:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:51.151 10:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:06:51.151 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:06:51.151 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:51.151 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:51.151 10:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:53.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:06:53.678 10:23:41 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.678 [2024-07-15 10:23:41.793357] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.678 10:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:53.935 10:23:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:06:53.935 10:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:06:53.935 10:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:53.935 10:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:53.935 10:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:06:56.455 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:56.455 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:56.455 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:56.455 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:56.455 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:56.455 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:06:56.455 10:23:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:56.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:56.455 10:23:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:56.455 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:06:56.455 10:23:44 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:56.455 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:56.455 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:56.455 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:56.455 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.456 [2024-07-15 10:23:44.597876] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.456 10:23:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:57.018 10:23:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:06:57.018 10:23:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:06:57.018 10:23:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:57.018 10:23:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:57.018 10:23:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:06:58.909 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:58.909 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:58.909 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:58.909 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:58.909 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:58.909 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:06:58.909 10:23:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:58.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:58.909 10:23:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:58.909 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:06:58.909 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:58.909 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.910 [2024-07-15 10:23:47.445385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.910 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.166 10:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.166 10:23:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:59.730 10:23:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:06:59.730 10:23:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:06:59.730 10:23:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:59.730 10:23:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:59.730 10:23:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:01.622 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:01.622 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:01.622 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:01.622 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:01.622 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:01.622 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:01.622 10:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:01.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:01.622 10:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:01.622 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.879 [2024-07-15 10:23:50.218755] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.879 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.880 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.880 10:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:02.443 10:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:02.443 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:02.443 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:02.443 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:02.443 10:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:04.340 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:04.340 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:04.340 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:04.340 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:04.340 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:04.340 
10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:04.340 10:23:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:04.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.598 [2024-07-15 10:23:52.937820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.598 10:23:52 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.598 10:23:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:05.164 10:23:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:05.164 10:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:05.164 10:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:05.164 10:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:05.164 10:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:07.061 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:07.061 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:07.061 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:07.061 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:07.061 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:07.061 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:07.061 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:07.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.319 [2024-07-15 10:23:55.701275] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.319 [2024-07-15 10:23:55.749344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.319 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.320 [2024-07-15 10:23:55.797501] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
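For orientation, the repeated "lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME" retries traced earlier in this test come from the waitforserial helper in common/autotest_common.sh. The following is a minimal bash sketch of what the trace suggests that helper does; the retry limit and the commands are taken from the trace above, but the body is a reconstruction for readability, not the upstream source:

    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1}
        local i=0 nvme_devices=0
        while ((i++ <= 15)); do
            sleep 2                                                # give the kernel time to enumerate the new namespace
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0    # expected block device(s) showed up
        done
        return 1                                                   # serial never appeared; caller treats this as a failure
    }

waitforserial_disconnect, also traced above, runs the inverse check: it loops until "grep -q -w" no longer finds the serial in the lsblk output after "nvme disconnect".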
00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.320 [2024-07-15 10:23:55.845649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.320 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
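Each pass of the loop traced here exercises the same subsystem life cycle over JSON-RPC. Condensed into plain bash, the cycle the log shows is roughly the following (rpc_cmd is the test suite's wrapper around scripts/rpc.py and $loops is the iteration count set earlier in rpc.sh; this is a summary of the traced commands, not a copy of the script):

    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

The earlier loop (target/rpc.sh@81-94) follows the same pattern but also connects from the initiator with "nvme connect", waits for the SPDKISFASTANDAWESOME serial to appear, and disconnects before tearing the subsystem down.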
00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 [2024-07-15 10:23:55.893858] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:07.578 "tick_rate": 2700000000, 00:07:07.578 "poll_groups": [ 00:07:07.578 { 00:07:07.578 "name": "nvmf_tgt_poll_group_000", 00:07:07.578 "admin_qpairs": 2, 00:07:07.578 "io_qpairs": 84, 00:07:07.578 "current_admin_qpairs": 0, 00:07:07.578 "current_io_qpairs": 0, 00:07:07.578 "pending_bdev_io": 0, 00:07:07.578 "completed_nvme_io": 233, 00:07:07.578 "transports": [ 00:07:07.578 { 00:07:07.578 "trtype": "TCP" 00:07:07.578 } 00:07:07.578 ] 00:07:07.578 }, 00:07:07.578 { 00:07:07.578 "name": "nvmf_tgt_poll_group_001", 00:07:07.578 "admin_qpairs": 2, 00:07:07.578 "io_qpairs": 84, 00:07:07.578 "current_admin_qpairs": 0, 00:07:07.578 "current_io_qpairs": 0, 00:07:07.578 "pending_bdev_io": 0, 00:07:07.578 "completed_nvme_io": 135, 00:07:07.578 "transports": [ 00:07:07.578 { 00:07:07.578 "trtype": "TCP" 00:07:07.578 } 00:07:07.578 ] 00:07:07.578 }, 00:07:07.578 { 00:07:07.578 
"name": "nvmf_tgt_poll_group_002", 00:07:07.578 "admin_qpairs": 1, 00:07:07.578 "io_qpairs": 84, 00:07:07.578 "current_admin_qpairs": 0, 00:07:07.578 "current_io_qpairs": 0, 00:07:07.578 "pending_bdev_io": 0, 00:07:07.578 "completed_nvme_io": 232, 00:07:07.578 "transports": [ 00:07:07.578 { 00:07:07.578 "trtype": "TCP" 00:07:07.578 } 00:07:07.578 ] 00:07:07.578 }, 00:07:07.578 { 00:07:07.578 "name": "nvmf_tgt_poll_group_003", 00:07:07.578 "admin_qpairs": 2, 00:07:07.578 "io_qpairs": 84, 00:07:07.578 "current_admin_qpairs": 0, 00:07:07.578 "current_io_qpairs": 0, 00:07:07.578 "pending_bdev_io": 0, 00:07:07.578 "completed_nvme_io": 86, 00:07:07.578 "transports": [ 00:07:07.578 { 00:07:07.578 "trtype": "TCP" 00:07:07.578 } 00:07:07.578 ] 00:07:07.578 } 00:07:07.578 ] 00:07:07.578 }' 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:07.578 10:23:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:07.578 rmmod nvme_tcp 00:07:07.578 rmmod nvme_fabrics 00:07:07.578 rmmod nvme_keyring 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1113007 ']' 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1113007 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1113007 ']' 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1113007 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1113007 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:07.578 10:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1113007' 00:07:07.578 killing process with pid 1113007 00:07:07.579 10:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1113007 00:07:07.579 10:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1113007 00:07:08.144 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:08.144 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:08.144 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:08.144 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:08.144 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:08.144 10:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.144 10:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.144 10:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.052 10:23:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:10.052 00:07:10.052 real 0m25.860s 00:07:10.052 user 1m24.342s 00:07:10.052 sys 0m4.173s 00:07:10.052 10:23:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.052 10:23:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.052 ************************************ 00:07:10.052 END TEST nvmf_rpc 00:07:10.052 ************************************ 00:07:10.052 10:23:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:10.052 10:23:58 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:10.052 10:23:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:10.052 10:23:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.052 10:23:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.052 ************************************ 00:07:10.052 START TEST nvmf_invalid 00:07:10.052 ************************************ 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:10.052 * Looking for test storage... 
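One note on the nvmf_get_stats check that closes the nvmf_rpc test above: the two arithmetic comparisons, (( 7 > 0 )) and (( 336 > 0 )), are the summed admin and I/O queue-pair counts across the four poll groups in the captured JSON. A small sketch of the jsum helper those trace lines imply (reconstructed from the jq/awk pipeline visible in the trace; the actual definition lives in test/nvmf/target/rpc.sh, and $stats is assumed to hold the JSON captured from rpc_cmd nvmf_get_stats):

    jsum() {
        local filter=$1
        # sum one numeric field across every poll group in the captured stats JSON
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    # The checks above then amount to:
    #   (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
    #   (( $(jsum '.poll_groups[].io_qpairs')    > 0 ))   # 336 in this run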
00:07:10.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:10.052 10:23:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:12.587 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:12.587 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:12.587 Found net devices under 0000:09:00.0: cvl_0_0 00:07:12.587 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:12.588 Found net devices under 0000:09:00.1: cvl_0_1 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:12.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:12.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:07:12.588 00:07:12.588 --- 10.0.0.2 ping statistics --- 00:07:12.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.588 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:12.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:12.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:07:12.588 00:07:12.588 --- 10.0.0.1 ping statistics --- 00:07:12.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.588 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1117566 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1117566 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1117566 ']' 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.588 10:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:12.588 [2024-07-15 10:24:00.804536] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:12.588 [2024-07-15 10:24:00.804613] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.588 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.588 [2024-07-15 10:24:00.873011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.588 [2024-07-15 10:24:00.979758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.588 [2024-07-15 10:24:00.979814] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.588 [2024-07-15 10:24:00.979838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.588 [2024-07-15 10:24:00.979848] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.588 [2024-07-15 10:24:00.979857] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:12.588 [2024-07-15 10:24:00.979947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.588 [2024-07-15 10:24:00.980003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.588 [2024-07-15 10:24:00.980067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.588 [2024-07-15 10:24:00.980070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.588 10:24:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.588 10:24:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:12.588 10:24:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:12.588 10:24:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:12.588 10:24:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:12.588 10:24:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.588 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:12.588 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22039 00:07:12.845 [2024-07-15 10:24:01.354163] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:12.845 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:12.845 { 00:07:12.845 "nqn": "nqn.2016-06.io.spdk:cnode22039", 00:07:12.845 "tgt_name": "foobar", 00:07:12.845 "method": "nvmf_create_subsystem", 00:07:12.845 "req_id": 1 00:07:12.845 } 00:07:12.845 Got JSON-RPC error response 00:07:12.845 response: 00:07:12.845 { 00:07:12.845 "code": -32603, 00:07:12.845 "message": "Unable to find target foobar" 00:07:12.845 }' 00:07:12.845 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:12.845 { 00:07:12.845 "nqn": "nqn.2016-06.io.spdk:cnode22039", 00:07:12.845 "tgt_name": "foobar", 00:07:12.845 "method": "nvmf_create_subsystem", 00:07:12.845 "req_id": 1 00:07:12.845 } 00:07:12.845 Got JSON-RPC error response 00:07:12.845 response: 00:07:12.845 { 00:07:12.845 "code": -32603, 00:07:12.845 "message": "Unable to find target foobar" 
00:07:12.845 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:12.845 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:12.845 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1789 00:07:13.103 [2024-07-15 10:24:01.615019] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1789: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:13.103 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:13.103 { 00:07:13.103 "nqn": "nqn.2016-06.io.spdk:cnode1789", 00:07:13.103 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:13.103 "method": "nvmf_create_subsystem", 00:07:13.103 "req_id": 1 00:07:13.103 } 00:07:13.103 Got JSON-RPC error response 00:07:13.103 response: 00:07:13.103 { 00:07:13.103 "code": -32602, 00:07:13.103 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:13.103 }' 00:07:13.103 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:13.103 { 00:07:13.103 "nqn": "nqn.2016-06.io.spdk:cnode1789", 00:07:13.103 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:13.103 "method": "nvmf_create_subsystem", 00:07:13.103 "req_id": 1 00:07:13.103 } 00:07:13.103 Got JSON-RPC error response 00:07:13.103 response: 00:07:13.103 { 00:07:13.103 "code": -32602, 00:07:13.103 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:13.103 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:13.103 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:13.103 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2382 00:07:13.360 [2024-07-15 10:24:01.871871] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2382: invalid model number 'SPDK_Controller' 00:07:13.360 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:13.360 { 00:07:13.360 "nqn": "nqn.2016-06.io.spdk:cnode2382", 00:07:13.360 "model_number": "SPDK_Controller\u001f", 00:07:13.360 "method": "nvmf_create_subsystem", 00:07:13.360 "req_id": 1 00:07:13.360 } 00:07:13.360 Got JSON-RPC error response 00:07:13.360 response: 00:07:13.361 { 00:07:13.361 "code": -32602, 00:07:13.361 "message": "Invalid MN SPDK_Controller\u001f" 00:07:13.361 }' 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:13.361 { 00:07:13.361 "nqn": "nqn.2016-06.io.spdk:cnode2382", 00:07:13.361 "model_number": "SPDK_Controller\u001f", 00:07:13.361 "method": "nvmf_create_subsystem", 00:07:13.361 "req_id": 1 00:07:13.361 } 00:07:13.361 Got JSON-RPC error response 00:07:13.361 response: 00:07:13.361 { 00:07:13.361 "code": -32602, 00:07:13.361 "message": "Invalid MN SPDK_Controller\u001f" 00:07:13.361 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' 
'85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.361 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.619 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.620 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:07:13.620 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'nbyl`sk/aHr.Mi:L~NK;a' 00:07:13.620 10:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'nbyl`sk/aHr.Mi:L~NK;a' nqn.2016-06.io.spdk:cnode7333 00:07:13.879 [2024-07-15 10:24:02.196974] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7333: invalid serial number 'nbyl`sk/aHr.Mi:L~NK;a' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:13.879 { 00:07:13.879 "nqn": "nqn.2016-06.io.spdk:cnode7333", 00:07:13.879 "serial_number": "nbyl`sk/aHr.Mi:L~NK;a", 00:07:13.879 "method": "nvmf_create_subsystem", 00:07:13.879 "req_id": 1 00:07:13.879 } 00:07:13.879 Got JSON-RPC error response 00:07:13.879 response: 00:07:13.879 { 00:07:13.879 "code": -32602, 00:07:13.879 "message": "Invalid SN nbyl`sk/aHr.Mi:L~NK;a" 00:07:13.879 }' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:13.879 { 00:07:13.879 "nqn": "nqn.2016-06.io.spdk:cnode7333", 00:07:13.879 "serial_number": "nbyl`sk/aHr.Mi:L~NK;a", 00:07:13.879 "method": "nvmf_create_subsystem", 00:07:13.879 "req_id": 1 00:07:13.879 } 00:07:13.879 Got JSON-RPC error response 00:07:13.879 response: 00:07:13.879 { 00:07:13.879 "code": -32602, 00:07:13.879 "message": "Invalid SN nbyl`sk/aHr.Mi:L~NK;a" 00:07:13.879 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ 
)) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:13.879 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:13.880 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ R == \- ]] 
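The xtrace above is the test's random-string helper assembling a 41-character candidate one character at a time (printf %x to get the code point, echo -e to emit it, string+= to append). A condensed, hypothetical equivalent of that loop, shown here as a sketch rather than the project's exact helper, is:

```bash
# Hypothetical condensed equivalent of the generator traced above (not the
# project's exact helper). It builds `length` random characters, limited here
# to ASCII 33-126 for simplicity; the trace also draws 32 (space) and 127.
gen_random_s() {
    local length=$1 ll hex string=''
    for ((ll = 0; ll < length; ll++)); do
        printf -v hex '%x' $((RANDOM % 94 + 33))   # random printable code point
        string+=$(echo -e "\x$hex")                # append that character
    done
    printf '%s\n' "$string"
}

gen_random_s 41   # e.g. a 41-character model-number candidate like the one above
```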
00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Rwd/k0mgj]sNZ!)+?[SJw`"1auD&HhgW3IdTsBiWx' 00:07:13.881 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Rwd/k0mgj]sNZ!)+?[SJw`"1auD&HhgW3IdTsBiWx' nqn.2016-06.io.spdk:cnode4598 00:07:14.139 [2024-07-15 10:24:02.590250] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4598: invalid model number 'Rwd/k0mgj]sNZ!)+?[SJw`"1auD&HhgW3IdTsBiWx' 00:07:14.139 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:07:14.139 { 00:07:14.139 "nqn": "nqn.2016-06.io.spdk:cnode4598", 00:07:14.139 "model_number": "Rwd/k0mgj]sNZ!)+?[SJw`\"1auD&HhgW3IdTsBiWx", 00:07:14.139 "method": "nvmf_create_subsystem", 00:07:14.139 "req_id": 1 00:07:14.139 } 00:07:14.139 Got JSON-RPC error response 00:07:14.139 response: 00:07:14.139 { 00:07:14.139 "code": -32602, 00:07:14.139 "message": "Invalid MN Rwd/k0mgj]sNZ!)+?[SJw`\"1auD&HhgW3IdTsBiWx" 00:07:14.139 }' 00:07:14.139 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:14.139 { 00:07:14.139 "nqn": "nqn.2016-06.io.spdk:cnode4598", 00:07:14.139 "model_number": "Rwd/k0mgj]sNZ!)+?[SJw`\"1auD&HhgW3IdTsBiWx", 00:07:14.139 "method": "nvmf_create_subsystem", 00:07:14.139 "req_id": 1 00:07:14.139 } 00:07:14.139 Got JSON-RPC error response 00:07:14.139 response: 00:07:14.139 { 00:07:14.139 "code": -32602, 00:07:14.139 "message": "Invalid MN Rwd/k0mgj]sNZ!)+?[SJw`\"1auD&HhgW3IdTsBiWx" 00:07:14.139 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:14.139 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:14.396 [2024-07-15 10:24:02.851254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.396 10:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:14.660 10:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:14.660 10:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:14.660 10:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:14.660 10:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:14.660 10:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:14.966 [2024-07-15 10:24:03.360901] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:14.966 10:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:14.966 { 00:07:14.966 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:14.966 "listen_address": { 00:07:14.966 "trtype": "tcp", 00:07:14.966 "traddr": "", 00:07:14.966 "trsvcid": "4421" 00:07:14.966 }, 00:07:14.966 "method": "nvmf_subsystem_remove_listener", 00:07:14.966 "req_id": 1 00:07:14.966 } 00:07:14.966 Got JSON-RPC error response 00:07:14.966 response: 00:07:14.966 { 00:07:14.966 "code": -32602, 00:07:14.966 "message": "Invalid parameters" 00:07:14.966 }' 00:07:14.966 10:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:14.966 { 00:07:14.966 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:14.966 "listen_address": { 00:07:14.966 "trtype": "tcp", 
00:07:14.966 "traddr": "", 00:07:14.966 "trsvcid": "4421" 00:07:14.966 }, 00:07:14.966 "method": "nvmf_subsystem_remove_listener", 00:07:14.966 "req_id": 1 00:07:14.966 } 00:07:14.966 Got JSON-RPC error response 00:07:14.966 response: 00:07:14.966 { 00:07:14.966 "code": -32602, 00:07:14.966 "message": "Invalid parameters" 00:07:14.966 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:14.966 10:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3861 -i 0 00:07:15.225 [2024-07-15 10:24:03.605652] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3861: invalid cntlid range [0-65519] 00:07:15.225 10:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:15.225 { 00:07:15.225 "nqn": "nqn.2016-06.io.spdk:cnode3861", 00:07:15.225 "min_cntlid": 0, 00:07:15.225 "method": "nvmf_create_subsystem", 00:07:15.225 "req_id": 1 00:07:15.225 } 00:07:15.225 Got JSON-RPC error response 00:07:15.225 response: 00:07:15.225 { 00:07:15.225 "code": -32602, 00:07:15.225 "message": "Invalid cntlid range [0-65519]" 00:07:15.225 }' 00:07:15.225 10:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:07:15.225 { 00:07:15.225 "nqn": "nqn.2016-06.io.spdk:cnode3861", 00:07:15.225 "min_cntlid": 0, 00:07:15.225 "method": "nvmf_create_subsystem", 00:07:15.225 "req_id": 1 00:07:15.225 } 00:07:15.225 Got JSON-RPC error response 00:07:15.225 response: 00:07:15.225 { 00:07:15.226 "code": -32602, 00:07:15.226 "message": "Invalid cntlid range [0-65519]" 00:07:15.226 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:15.226 10:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5063 -i 65520 00:07:15.483 [2024-07-15 10:24:03.854461] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5063: invalid cntlid range [65520-65519] 00:07:15.483 10:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:07:15.483 { 00:07:15.483 "nqn": "nqn.2016-06.io.spdk:cnode5063", 00:07:15.483 "min_cntlid": 65520, 00:07:15.483 "method": "nvmf_create_subsystem", 00:07:15.483 "req_id": 1 00:07:15.483 } 00:07:15.483 Got JSON-RPC error response 00:07:15.483 response: 00:07:15.483 { 00:07:15.483 "code": -32602, 00:07:15.483 "message": "Invalid cntlid range [65520-65519]" 00:07:15.483 }' 00:07:15.483 10:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:07:15.483 { 00:07:15.483 "nqn": "nqn.2016-06.io.spdk:cnode5063", 00:07:15.483 "min_cntlid": 65520, 00:07:15.483 "method": "nvmf_create_subsystem", 00:07:15.483 "req_id": 1 00:07:15.483 } 00:07:15.483 Got JSON-RPC error response 00:07:15.483 response: 00:07:15.483 { 00:07:15.483 "code": -32602, 00:07:15.483 "message": "Invalid cntlid range [65520-65519]" 00:07:15.483 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:15.483 10:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17068 -I 0 00:07:15.741 [2024-07-15 10:24:04.115364] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17068: invalid cntlid range [1-0] 00:07:15.741 10:24:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:07:15.741 { 00:07:15.741 "nqn": 
"nqn.2016-06.io.spdk:cnode17068", 00:07:15.741 "max_cntlid": 0, 00:07:15.741 "method": "nvmf_create_subsystem", 00:07:15.741 "req_id": 1 00:07:15.741 } 00:07:15.741 Got JSON-RPC error response 00:07:15.741 response: 00:07:15.741 { 00:07:15.741 "code": -32602, 00:07:15.741 "message": "Invalid cntlid range [1-0]" 00:07:15.741 }' 00:07:15.741 10:24:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:07:15.741 { 00:07:15.741 "nqn": "nqn.2016-06.io.spdk:cnode17068", 00:07:15.741 "max_cntlid": 0, 00:07:15.741 "method": "nvmf_create_subsystem", 00:07:15.741 "req_id": 1 00:07:15.741 } 00:07:15.741 Got JSON-RPC error response 00:07:15.741 response: 00:07:15.741 { 00:07:15.741 "code": -32602, 00:07:15.741 "message": "Invalid cntlid range [1-0]" 00:07:15.741 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:15.741 10:24:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27795 -I 65520 00:07:15.999 [2024-07-15 10:24:04.372149] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27795: invalid cntlid range [1-65520] 00:07:15.999 10:24:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:07:15.999 { 00:07:15.999 "nqn": "nqn.2016-06.io.spdk:cnode27795", 00:07:15.999 "max_cntlid": 65520, 00:07:15.999 "method": "nvmf_create_subsystem", 00:07:15.999 "req_id": 1 00:07:15.999 } 00:07:15.999 Got JSON-RPC error response 00:07:15.999 response: 00:07:15.999 { 00:07:15.999 "code": -32602, 00:07:15.999 "message": "Invalid cntlid range [1-65520]" 00:07:15.999 }' 00:07:15.999 10:24:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:07:15.999 { 00:07:15.999 "nqn": "nqn.2016-06.io.spdk:cnode27795", 00:07:15.999 "max_cntlid": 65520, 00:07:15.999 "method": "nvmf_create_subsystem", 00:07:15.999 "req_id": 1 00:07:15.999 } 00:07:15.999 Got JSON-RPC error response 00:07:15.999 response: 00:07:15.999 { 00:07:15.999 "code": -32602, 00:07:15.999 "message": "Invalid cntlid range [1-65520]" 00:07:15.999 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:15.999 10:24:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13453 -i 6 -I 5 00:07:16.257 [2024-07-15 10:24:04.612946] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13453: invalid cntlid range [6-5] 00:07:16.257 10:24:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:07:16.257 { 00:07:16.257 "nqn": "nqn.2016-06.io.spdk:cnode13453", 00:07:16.257 "min_cntlid": 6, 00:07:16.257 "max_cntlid": 5, 00:07:16.257 "method": "nvmf_create_subsystem", 00:07:16.257 "req_id": 1 00:07:16.257 } 00:07:16.257 Got JSON-RPC error response 00:07:16.257 response: 00:07:16.257 { 00:07:16.257 "code": -32602, 00:07:16.257 "message": "Invalid cntlid range [6-5]" 00:07:16.257 }' 00:07:16.257 10:24:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:07:16.257 { 00:07:16.257 "nqn": "nqn.2016-06.io.spdk:cnode13453", 00:07:16.257 "min_cntlid": 6, 00:07:16.257 "max_cntlid": 5, 00:07:16.257 "method": "nvmf_create_subsystem", 00:07:16.257 "req_id": 1 00:07:16.257 } 00:07:16.257 Got JSON-RPC error response 00:07:16.257 response: 00:07:16.257 { 00:07:16.257 "code": -32602, 00:07:16.257 "message": "Invalid cntlid range [6-5]" 00:07:16.257 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:16.257 10:24:04 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:16.257 10:24:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:16.257 { 00:07:16.257 "name": "foobar", 00:07:16.257 "method": "nvmf_delete_target", 00:07:16.257 "req_id": 1 00:07:16.257 } 00:07:16.257 Got JSON-RPC error response 00:07:16.257 response: 00:07:16.257 { 00:07:16.257 "code": -32602, 00:07:16.257 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:16.258 }' 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:16.258 { 00:07:16.258 "name": "foobar", 00:07:16.258 "method": "nvmf_delete_target", 00:07:16.258 "req_id": 1 00:07:16.258 } 00:07:16.258 Got JSON-RPC error response 00:07:16.258 response: 00:07:16.258 { 00:07:16.258 "code": -32602, 00:07:16.258 "message": "The specified target doesn't exist, cannot delete it." 00:07:16.258 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:16.258 rmmod nvme_tcp 00:07:16.258 rmmod nvme_fabrics 00:07:16.258 rmmod nvme_keyring 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1117566 ']' 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1117566 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1117566 ']' 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1117566 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.258 10:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1117566 00:07:16.516 10:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:16.516 10:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:16.516 10:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1117566' 00:07:16.516 killing process with pid 1117566 00:07:16.516 10:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1117566 00:07:16.516 10:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1117566 00:07:16.774 10:24:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:16.774 10:24:05 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:16.774 10:24:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:16.774 10:24:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:16.774 10:24:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:16.774 10:24:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.774 10:24:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:16.774 10:24:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.683 10:24:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:18.683 00:07:18.683 real 0m8.638s 00:07:18.683 user 0m19.962s 00:07:18.683 sys 0m2.419s 00:07:18.683 10:24:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.683 10:24:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:18.683 ************************************ 00:07:18.684 END TEST nvmf_invalid 00:07:18.684 ************************************ 00:07:18.684 10:24:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:18.684 10:24:07 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:18.684 10:24:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:18.684 10:24:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.684 10:24:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:18.684 ************************************ 00:07:18.684 START TEST nvmf_abort 00:07:18.684 ************************************ 00:07:18.684 10:24:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:18.942 * Looking for test storage... 
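Every negative case in the nvmf_invalid section above follows the same shape: issue one malformed JSON-RPC request through scripts/rpc.py, capture the error text, and pattern-match the message. A minimal standalone sketch of one such case (the [6-5] cntlid range seen earlier; the rpc.py path and -i/-I options are taken from the log, while the default /var/tmp/spdk.sock socket and a non-zero exit on JSON-RPC error are assumptions):

```bash
# Hypothetical standalone repro of one negative case from the nvmf_invalid
# section above, using the same scripts/rpc.py the test drives (default
# socket /var/tmp/spdk.sock assumed); -i/-I map to min_cntlid/max_cntlid as
# in the log, and rpc.py is assumed to exit non-zero on a JSON-RPC error.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13453 -i 6 -I 5 2>&1); then
    echo "unexpectedly accepted: $out"
else
    [[ $out == *"Invalid cntlid range"* ]] && echo "rejected as expected: [6-5]"
fi
```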
00:07:18.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.942 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:18.943 10:24:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:20.842 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.842 10:24:09 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:20.842 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.842 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:20.843 Found net devices under 0000:09:00.0: cvl_0_0 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:20.843 Found net devices under 0000:09:00.1: cvl_0_1 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.843 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:21.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:07:21.100 00:07:21.100 --- 10.0.0.2 ping statistics --- 00:07:21.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.100 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:21.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:07:21.100 00:07:21.100 --- 10.0.0.1 ping statistics --- 00:07:21.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.100 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1120762 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1120762 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1120762 ']' 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.100 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.100 [2024-07-15 10:24:09.480388] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:21.100 [2024-07-15 10:24:09.480468] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.100 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.100 [2024-07-15 10:24:09.541221] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.100 [2024-07-15 10:24:09.647350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.100 [2024-07-15 10:24:09.647411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
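The nvmftestinit/nvmf_tcp_init trace above boils down to a small amount of network plumbing: the two E810 ports show up as cvl_0_0 and cvl_0_1, the target-side port is moved into a private network namespace, addresses from 10.0.0.0/24 are assigned, TCP port 4420 is opened, and connectivity is verified with ping before nvmf_tgt is launched inside the namespace. A minimal sketch of those steps, with interface names, addresses and paths copied from this log (run as root; the real logic lives in test/nvmf/common.sh):

    # Target port lives in its own namespace; initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator
    modprobe nvme-tcp
    # Start the target inside the namespace, as nvmfappstart does:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
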
00:07:21.100 [2024-07-15 10:24:09.647425] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.100 [2024-07-15 10:24:09.647435] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.100 [2024-07-15 10:24:09.647459] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.100 [2024-07-15 10:24:09.647551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.100 [2024-07-15 10:24:09.647661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.100 [2024-07-15 10:24:09.647664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.358 [2024-07-15 10:24:09.785844] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.358 Malloc0 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.358 Delay0 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.358 10:24:09 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.358 [2024-07-15 10:24:09.863735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.358 10:24:09 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:21.358 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.615 [2024-07-15 10:24:09.928003] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:24.137 Initializing NVMe Controllers 00:07:24.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:24.137 controller IO queue size 128 less than required 00:07:24.137 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:24.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:24.137 Initialization complete. Launching workers. 
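Before the abort example above was launched, abort.sh provisioned the target over the RPC socket with the rpc_cmd calls traced earlier. Condensed into plain rpc.py invocations (same arguments as in the trace; rpc.py stands in here for the rpc_cmd shell wrapper), the target setup plus the initiator-side workload look roughly like this:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256          # TCP transport, options as used by abort.sh
    $RPC bdev_malloc_create 64 4096 -b Malloc0                   # 64 MB RAM-backed bdev, 4096-byte blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000             # ~1 s added latency so I/O stays queued
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: queue 128 reads against the slow namespace and abort them.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
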
00:07:24.137 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32108 00:07:24.137 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32169, failed to submit 62 00:07:24.137 success 32112, unsuccess 57, failed 0 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:24.137 rmmod nvme_tcp 00:07:24.137 rmmod nvme_fabrics 00:07:24.137 rmmod nvme_keyring 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1120762 ']' 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1120762 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1120762 ']' 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1120762 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1120762 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1120762' 00:07:24.137 killing process with pid 1120762 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1120762 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1120762 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- 
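The worker statistics just printed are internally consistent, which is a quick way to sanity-check an abort run like this one:

    echo $((32112 + 57 + 0))     # success + unsuccess + failed       -> 32169 aborts submitted
    echo $((32169 + 62))         # submitted + failed-to-submit       -> 32231 abort attempts
    echo $((123 + 32108))        # I/O completed + I/O failed         -> 32231 I/Os issued

The totals line up: every I/O the example queued against the delayed namespace corresponds to one abort attempt, which is the behaviour this test is meant to exercise.
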
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.137 10:24:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.044 10:24:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:26.044 00:07:26.044 real 0m7.356s 00:07:26.044 user 0m10.531s 00:07:26.044 sys 0m2.690s 00:07:26.044 10:24:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.044 10:24:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:26.044 ************************************ 00:07:26.044 END TEST nvmf_abort 00:07:26.044 ************************************ 00:07:26.044 10:24:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:26.044 10:24:14 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:26.044 10:24:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:26.044 10:24:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.044 10:24:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:26.303 ************************************ 00:07:26.303 START TEST nvmf_ns_hotplug_stress 00:07:26.303 ************************************ 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:26.303 * Looking for test storage... 00:07:26.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.303 10:24:14 
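The real/user/sys timings and the START TEST / END TEST banners around nvmf_abort, as well as the argument-count check ('[' 3 -le 1 ']') at the start of nvmf_ns_hotplug_stress, all come from the run_test helper in autotest_common.sh. A much-simplified, purely illustrative stand-in for that wrapper (the real implementation differs) would be:

    # Hypothetical, minimal version of the run_test harness, for illustration only.
    run_test() {
        [ $# -le 1 ] && return 1            # need a test name plus a command, as the guard above checks
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                           # produces the real/user/sys lines
        local rc=$?
        echo "END TEST $name"
        return $rc
    }
    run_test nvmf_ns_hotplug_stress \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
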
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.303 10:24:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:26.303 10:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:28.837 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:28.837 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.837 10:24:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.837 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:28.838 Found net devices under 0000:09:00.0: cvl_0_0 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:28.838 Found net devices under 0000:09:00.1: cvl_0_1 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.838 10:24:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:28.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:07:28.838 00:07:28.838 --- 10.0.0.2 ping statistics --- 00:07:28.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.838 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:28.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:07:28.838 00:07:28.838 --- 10.0.0.1 ping statistics --- 00:07:28.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.838 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1123109 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1123109 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1123109 ']' 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.838 10:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:28.838 [2024-07-15 10:24:16.999740] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:28.838 [2024-07-15 10:24:16.999827] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.838 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.838 [2024-07-15 10:24:17.063771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.838 [2024-07-15 10:24:17.164311] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.838 [2024-07-15 10:24:17.164378] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.838 [2024-07-15 10:24:17.164404] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.838 [2024-07-15 10:24:17.164416] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.838 [2024-07-15 10:24:17.164426] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:28.838 [2024-07-15 10:24:17.164507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.838 [2024-07-15 10:24:17.164611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.838 [2024-07-15 10:24:17.164619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.838 10:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.838 10:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:28.838 10:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:28.838 10:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:28.838 10:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:28.838 10:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.838 10:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:28.838 10:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:29.096 [2024-07-15 10:24:17.577434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.096 10:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:29.354 10:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.612 [2024-07-15 10:24:18.084189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.612 10:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:29.869 10:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:07:30.128 Malloc0 00:07:30.128 10:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:30.384 Delay0 00:07:30.384 10:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.640 10:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:30.897 NULL1 00:07:30.898 10:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:31.154 10:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1123409 00:07:31.154 10:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:31.154 10:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:31.154 10:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.154 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.525 Read completed with error (sct=0, sc=11) 00:07:32.525 10:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.782 10:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:32.782 10:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:33.039 true 00:07:33.039 10:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:33.039 10:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.604 10:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.861 10:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:33.861 10:24:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:34.119 true 00:07:34.377 10:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:34.377 10:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.377 10:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.635 10:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:34.635 10:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:34.897 true 00:07:34.897 10:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:34.897 10:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.154 10:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.411 10:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:35.411 10:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:35.668 true 00:07:35.668 10:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:35.668 10:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.599 10:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.856 10:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:36.856 10:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:37.112 true 00:07:37.112 10:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:37.112 10:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.369 10:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.626 10:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:37.626 10:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:37.883 true 00:07:37.883 10:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:37.883 10:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.813 10:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.069 10:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:39.069 10:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:39.327 true 00:07:39.327 10:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:39.327 10:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.583 10:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.583 10:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:39.583 10:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:39.839 true 00:07:39.839 10:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:39.839 10:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.208 10:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.208 10:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:41.208 10:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:41.472 true 00:07:41.472 10:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:41.472 10:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.780 10:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.061 10:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:42.061 10:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1010 00:07:42.061 true 00:07:42.061 10:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:42.061 10:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.992 10:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.248 10:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:43.249 10:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:43.506 true 00:07:43.506 10:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:43.506 10:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.762 10:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.019 10:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:44.019 10:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:44.275 true 00:07:44.275 10:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:44.275 10:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.249 10:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.506 10:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:45.506 10:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:45.506 true 00:07:45.506 10:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:45.506 10:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.763 10:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.020 10:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:46.020 10:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:46.278 true 00:07:46.278 10:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:46.278 10:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.211 10:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.776 10:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:47.776 10:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:47.776 true 00:07:47.776 10:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:47.776 10:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.034 10:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.291 10:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:48.291 10:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:48.548 true 00:07:48.548 10:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:48.548 10:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.805 10:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.062 10:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:49.062 10:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:49.319 true 00:07:49.319 10:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:49.319 10:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.691 10:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.691 10:24:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:50.691 10:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:50.948 true 00:07:50.948 10:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:50.948 10:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.514 10:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.514 10:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:51.514 10:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:51.772 true 00:07:51.772 10:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:51.772 10:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.705 10:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.962 10:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:52.962 10:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:53.219 true 00:07:53.220 10:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:53.220 10:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.477 10:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.477 10:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:53.477 10:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:53.734 true 00:07:53.734 10:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:53.734 10:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.667 10:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.923 10:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1022 00:07:54.923 10:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:55.180 true 00:07:55.180 10:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:55.180 10:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.437 10:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.694 10:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:55.695 10:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:55.952 true 00:07:55.952 10:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:55.952 10:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.209 10:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.466 10:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:56.466 10:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:56.724 true 00:07:56.724 10:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:56.724 10:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.096 10:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.096 10:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:58.096 10:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:58.353 true 00:07:58.353 10:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:58.353 10:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.611 10:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:07:58.869 10:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:58.869 10:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:59.127 true 00:07:59.127 10:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:07:59.127 10:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.060 10:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.316 10:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:00.316 10:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:00.573 true 00:08:00.573 10:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:08:00.573 10:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.829 10:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.085 10:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:01.085 10:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:01.341 true 00:08:01.341 10:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409 00:08:01.341 10:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.272 Initializing NVMe Controllers 00:08:02.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:02.272 Controller IO queue size 128, less than required. 00:08:02.272 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:02.272 Controller IO queue size 128, less than required. 00:08:02.272 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:02.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:02.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:02.272 Initialization complete. Launching workers. 
00:08:02.272 ========================================================
00:08:02.272                                                                             Latency(us)
00:08:02.272 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:02.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     854.27       0.42   78511.75    2503.69 1012340.68
00:08:02.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   10424.57       5.09   12278.73    3404.50  537608.83
00:08:02.272 ========================================================
00:08:02.272 Total                                                                    :   11278.83       5.51   17295.27    2503.69 1012340.68
00:08:02.272
00:08:02.272 10:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:02.529 10:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:02.529 10:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:02.785 true
00:08:02.785 10:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1123409
00:08:02.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1123409) - No such process
00:08:02.785 10:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1123409
00:08:02.785 10:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:03.042 10:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:03.300 10:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:03.300 10:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:03.300 10:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:03.300 10:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:03.300 10:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:03.558 null0
00:08:03.558 10:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:03.558 10:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:03.558 10:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:03.815 null1
00:08:03.815 10:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:03.815 10:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:03.815 10:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:04.073 null2
00:08:04.073 10:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:04.073 10:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i <
nthreads )) 00:08:04.073 10:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:04.331 null3 00:08:04.331 10:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.331 10:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.331 10:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:04.589 null4 00:08:04.589 10:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.589 10:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.589 10:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:04.589 null5 00:08:04.846 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.846 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.846 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:04.846 null6 00:08:04.846 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.846 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.846 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:05.103 null7 00:08:05.103 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:05.103 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:05.103 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:05.103 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.103 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:05.103 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1127592 1127593 1127595 1127597 1127599 1127601 1127603 1127605 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.361 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.619 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.619 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.619 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.619 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.619 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.619 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.619 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.619 10:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.876 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.134 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.134 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.134 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.134 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.134 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.134 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.134 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.134 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.392 10:24:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.392 10:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.650 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.650 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.650 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.650 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.650 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.650 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.650 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.650 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.907 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.166 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.166 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.166 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.166 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.166 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.166 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.166 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.166 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.424 10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.424 
10:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.682 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.682 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.682 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.682 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.682 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.682 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.682 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.682 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.940 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.197 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.197 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.197 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.197 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.197 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.197 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.197 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.197 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.454 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.455 10:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.712 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.712 
10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.712 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.712 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.712 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.712 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.970 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.970 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.970 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.970 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.970 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.228 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:09.485 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:09.485 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.485 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:09.485 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:09.485 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:09.485 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:09.485 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:09.485 10:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.743 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:10.000 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:10.000 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:10.001 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:10.001 
10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:10.001 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.001 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:10.001 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:10.001 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.259 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:10.517 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:10.517 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:10.517 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:10.517 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:10.517 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:10.517 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.517 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:10.517 10:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
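The pattern traced above (ns_hotplug_stress.sh lines 16-18) is, in rough outline, this: eight null bdevs (null0..null7) are repeatedly attached to and detached from nqn.2016-06.io.spdk:cnode1 as namespaces 1..8, with a per-loop counter bounded at 10 passes. A minimal sketch of that pattern follows, assuming one backgrounded worker per namespace (the interleaved counter lines in the trace suggest several such loops running in parallel); the add_remove name and the exact loop structure are illustrative reconstructions, not necessarily the script's own.

    #!/usr/bin/env bash
    # Sketch of the hot-plug stress pattern seen in the trace above.
    # Assumptions: "rpc" points at spdk/scripts/rpc.py; add_remove is an
    # illustrative name; the real ns_hotplug_stress.sh may structure the
    # loop differently.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                               # loop counter, line 16 in the trace
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # line 17
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # line 18
        done
    }

    for n in $(seq 1 8); do
        add_remove "$n" "null$((n - 1))" &   # one worker per namespace; output interleaves as in the trace
    done
    wait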
00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:10.774 rmmod nvme_tcp 00:08:10.774 rmmod nvme_fabrics 00:08:10.774 rmmod nvme_keyring 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1123109 ']' 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1123109 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1123109 ']' 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1123109 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1123109 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1123109' 00:08:10.774 killing process with pid 1123109 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1123109 00:08:10.774 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1123109 00:08:11.033 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:11.033 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:11.033 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:11.033 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.033 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:11.033 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.033 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.033 10:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.582 10:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:13.582 00:08:13.582 real 0m47.025s 00:08:13.582 user 3m33.106s 00:08:13.582 sys 0m16.543s 00:08:13.582 10:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.582 10:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:13.582 ************************************ 00:08:13.582 END TEST nvmf_ns_hotplug_stress 00:08:13.582 ************************************ 00:08:13.582 10:25:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:13.582 10:25:01 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:13.582 10:25:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:13.582 10:25:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.582 10:25:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:13.582 ************************************ 00:08:13.582 START TEST nvmf_connect_stress 00:08:13.582 ************************************ 00:08:13.582 10:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:13.582 * Looking for test storage... 
00:08:13.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.582 10:25:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.582 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:13.582 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.582 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.583 10:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:15.483 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:15.483 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.483 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:15.484 Found net devices under 0000:09:00.0: cvl_0_0 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.484 10:25:03 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:15.484 Found net devices under 0000:09:00.1: cvl_0_1 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:15.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:15.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:08:15.484 00:08:15.484 --- 10.0.0.2 ping statistics --- 00:08:15.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.484 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:08:15.484 00:08:15.484 --- 10.0.0.1 ping statistics --- 00:08:15.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.484 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:15.484 10:25:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:15.484 10:25:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1130354 00:08:15.484 10:25:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:15.484 10:25:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1130354 00:08:15.484 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1130354 ']' 00:08:15.484 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.484 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.484 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.484 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.484 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:15.742 [2024-07-15 10:25:04.047815] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
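The nvmf_tcp_init sequence traced above moves one port of the e810 pair (cvl_0_0) into a private network namespace and leaves its sibling (cvl_0_1) in the host namespace, so the target and the initiator exchange traffic over the physical link even though both run on the same machine. Condensed from the trace (the commands are taken verbatim from the log; only the grouping comments are added):

    # Target-side interface lives in its own netns; initiator side stays in the host.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # 10.0.0.1 = initiator (host side), 10.0.0.2 = target (namespace side)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port and sanity-check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With both pings answering, the target application (nvmf_tgt) is then started inside cvl_0_0_ns_spdk, which is why the remaining RPC traffic in the trace is prefixed with "ip netns exec cvl_0_0_ns_spdk".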
00:08:15.742 [2024-07-15 10:25:04.047903] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.742 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.742 [2024-07-15 10:25:04.110268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:15.742 [2024-07-15 10:25:04.221162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.742 [2024-07-15 10:25:04.221214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.742 [2024-07-15 10:25:04.221242] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.742 [2024-07-15 10:25:04.221260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.742 [2024-07-15 10:25:04.221270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.742 [2024-07-15 10:25:04.221415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.742 [2024-07-15 10:25:04.221448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.742 [2024-07-15 10:25:04.221450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.001 [2024-07-15 10:25:04.368048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.001 [2024-07-15 10:25:04.399938] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.001 NULL1 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1130377 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.001 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.259 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.259 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:16.259 10:25:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:16.259 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.259 10:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.824 10:25:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.824 10:25:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:16.824 10:25:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:16.824 10:25:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.824 10:25:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:17.081 10:25:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.081 10:25:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 
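From this point on, the trace is the steady state of connect_stress.sh: the connect_stress tool (PID 1130377) hammers the listener with connect/disconnect cycles for 10 seconds while the script keeps checking that both the tool and the target's RPC server stay responsive. Stripped of the xtrace noise, the setup and the watchdog loop amount to roughly the following sketch; how rpc.txt is fed into rpc_cmd is not visible in the xtrace (redirections are not echoed), so the redirection shown here is an assumption, and the file's contents, built by the seq 1 20 / cat loop above, are not shown in the log.

    # Target setup, as issued through rpc_cmd in the trace
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512

    # 10-second connect/disconnect storm against the listener, run in the background
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    # Watchdog (lines 34-35 in the trace): while the stressor is alive, keep the
    # RPC server busy with the batched calls collected in rpc.txt.
    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    while kill -0 "$PERF_PID"; do
        rpc_cmd < "$rpcs"    # assumption: rpc.txt is replayed on each pass
    done

The repeated "kill -0 1130377" / "rpc_cmd" pairs that fill the rest of this section are successive iterations of that watchdog loop.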
00:08:17.081 10:25:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:17.081 10:25:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.081 10:25:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:17.339 10:25:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.339 10:25:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:17.339 10:25:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:17.339 10:25:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.339 10:25:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:17.620 10:25:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.620 10:25:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:17.620 10:25:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:17.620 10:25:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.620 10:25:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:17.913 10:25:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.913 10:25:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:17.913 10:25:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:17.913 10:25:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.913 10:25:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:18.171 10:25:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.171 10:25:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:18.171 10:25:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:18.171 10:25:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.171 10:25:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:18.734 10:25:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.734 10:25:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:18.734 10:25:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:18.734 10:25:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.734 10:25:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:18.992 10:25:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.992 10:25:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:18.992 10:25:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:18.992 10:25:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.992 10:25:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.275 10:25:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.275 10:25:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:19.275 10:25:07 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:08:19.275 10:25:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.275 10:25:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.533 10:25:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.533 10:25:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:19.533 10:25:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:19.533 10:25:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.533 10:25:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.790 10:25:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.790 10:25:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:19.790 10:25:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:19.790 10:25:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.790 10:25:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.354 10:25:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.354 10:25:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:20.354 10:25:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:20.354 10:25:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.354 10:25:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.611 10:25:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.611 10:25:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:20.611 10:25:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:20.611 10:25:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.611 10:25:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.868 10:25:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.868 10:25:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:20.868 10:25:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:20.868 10:25:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.868 10:25:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.125 10:25:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.125 10:25:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:21.125 10:25:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.125 10:25:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.125 10:25:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.382 10:25:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.382 10:25:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:21.382 10:25:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.382 
10:25:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.382 10:25:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.945 10:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.945 10:25:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:21.945 10:25:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.945 10:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.945 10:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.202 10:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.202 10:25:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:22.202 10:25:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.202 10:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.202 10:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.459 10:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.459 10:25:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:22.459 10:25:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.459 10:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.459 10:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.716 10:25:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.716 10:25:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:22.716 10:25:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.716 10:25:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.716 10:25:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.280 10:25:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.280 10:25:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:23.280 10:25:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.280 10:25:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.280 10:25:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.537 10:25:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.538 10:25:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:23.538 10:25:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.538 10:25:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.538 10:25:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.795 10:25:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.795 10:25:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:23.795 10:25:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.795 10:25:12 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.795 10:25:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.052 10:25:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.052 10:25:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:24.052 10:25:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.052 10:25:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.052 10:25:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.309 10:25:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.309 10:25:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:24.309 10:25:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.309 10:25:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.309 10:25:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.880 10:25:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.880 10:25:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:24.880 10:25:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.880 10:25:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.880 10:25:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.140 10:25:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.140 10:25:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:25.140 10:25:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.140 10:25:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.140 10:25:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.397 10:25:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.397 10:25:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:25.397 10:25:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.397 10:25:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.397 10:25:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.653 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.653 10:25:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:25.653 10:25:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.653 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.653 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.910 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.910 10:25:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:25.910 10:25:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.910 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:08:25.910 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.169 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:26.426 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.426 10:25:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1130377 00:08:26.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1130377) - No such process 00:08:26.426 10:25:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1130377 00:08:26.426 10:25:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:26.427 rmmod nvme_tcp 00:08:26.427 rmmod nvme_fabrics 00:08:26.427 rmmod nvme_keyring 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1130354 ']' 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1130354 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1130354 ']' 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1130354 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1130354 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1130354' 00:08:26.427 killing process with pid 1130354 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1130354 00:08:26.427 10:25:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1130354 00:08:26.684 10:25:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:26.684 10:25:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:26.684 10:25:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:08:26.684 10:25:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.684 10:25:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:26.684 10:25:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.684 10:25:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.684 10:25:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.589 10:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:28.589 00:08:28.589 real 0m15.456s 00:08:28.589 user 0m38.558s 00:08:28.589 sys 0m5.855s 00:08:28.589 10:25:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.589 10:25:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.589 ************************************ 00:08:28.589 END TEST nvmf_connect_stress 00:08:28.589 ************************************ 00:08:28.848 10:25:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:28.848 10:25:17 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:28.848 10:25:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:28.848 10:25:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.848 10:25:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.848 ************************************ 00:08:28.848 START TEST nvmf_fused_ordering 00:08:28.848 ************************************ 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:28.848 * Looking for test storage... 
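The nvmf_fused_ordering run below brings the TCP target up inside a network namespace, provisions it over JSON-RPC, and then exercises fused command ordering against it from the root namespace. A condensed, hand-written sketch of that sequence follows; it only restates the commands recorded in the trace below, and it assumes the harness's rpc_cmd wrapper maps to scripts/rpc.py on the target's default RPC socket. The interface names (cvl_0_0/cvl_0_1), addresses, and paths are the ones this particular host happens to use, not a canonical recipe.

#!/usr/bin/env bash
# Sketch of the target bring-up recorded in the nvmf_fused_ordering trace below.
# Assumptions: rpc_cmd == scripts/rpc.py on the default socket; fixed sleep in
# place of the harness's waitforlisten helper; host-specific names throughout.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"

# Move one port of the e810 pair into a private namespace so the target
# (10.0.0.2, inside the namespace) and the initiator (10.0.0.1, root
# namespace) talk over a real link rather than loopback.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # connectivity sanity check before starting the target

# Start nvmf_tgt on one core inside the namespace, then provision it:
# TCP transport, a subsystem backed by a 1000 MiB / 512-byte-block null
# bdev, and a listener on 10.0.0.2:4420.
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
sleep 2    # stand-in for waitforlisten
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_null_create NULL1 1000 512
"$rpc" bdev_wait_for_examine
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Finally, drive the fused-ordering exerciser at the listener from the
# root namespace; its per-iteration output is the fused_ordering(N) run
# that fills the rest of this trace.
"$spdk/test/nvme/fused_ordering/fused_ordering" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The nvmf_connect_stress run that finished above uses the same bring-up; the difference is that it backgrounds a stress client and polls it with kill -0 while replaying a batch of RPCs, which is what the long kill -0 1130377 / rpc_cmd sequence earlier in this trace records.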
00:08:28.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:28.848 10:25:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:31.376 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:31.376 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.376 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:31.377 Found net devices under 0000:09:00.0: cvl_0_0 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.377 10:25:19 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:31.377 Found net devices under 0000:09:00.1: cvl_0_1 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:31.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:08:31.377 00:08:31.377 --- 10.0.0.2 ping statistics --- 00:08:31.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.377 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:08:31.377 00:08:31.377 --- 10.0.0.1 ping statistics --- 00:08:31.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.377 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1133543 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1133543 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1133543 ']' 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:31.377 [2024-07-15 10:25:19.558948] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:31.377 [2024-07-15 10:25:19.559029] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.377 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.377 [2024-07-15 10:25:19.620368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.377 [2024-07-15 10:25:19.724928] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.377 [2024-07-15 10:25:19.724983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.377 [2024-07-15 10:25:19.725014] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.377 [2024-07-15 10:25:19.725026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.377 [2024-07-15 10:25:19.725037] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.377 [2024-07-15 10:25:19.725062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:31.377 [2024-07-15 10:25:19.860691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:31.377 [2024-07-15 10:25:19.876882] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.377 10:25:19 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:31.377 NULL1 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.377 10:25:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:31.377 [2024-07-15 10:25:19.920415] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:31.378 [2024-07-15 10:25:19.920450] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133677 ] 00:08:31.635 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.894 Attached to nqn.2016-06.io.spdk:cnode1 00:08:31.894 Namespace ID: 1 size: 1GB 00:08:31.894 fused_ordering(0) 00:08:31.894 fused_ordering(1) 00:08:31.894 fused_ordering(2) 00:08:31.894 fused_ordering(3) 00:08:31.894 fused_ordering(4) 00:08:31.894 fused_ordering(5) 00:08:31.894 fused_ordering(6) 00:08:31.894 fused_ordering(7) 00:08:31.894 fused_ordering(8) 00:08:31.894 fused_ordering(9) 00:08:31.894 fused_ordering(10) 00:08:31.894 fused_ordering(11) 00:08:31.894 fused_ordering(12) 00:08:31.894 fused_ordering(13) 00:08:31.894 fused_ordering(14) 00:08:31.894 fused_ordering(15) 00:08:31.894 fused_ordering(16) 00:08:31.894 fused_ordering(17) 00:08:31.894 fused_ordering(18) 00:08:31.894 fused_ordering(19) 00:08:31.894 fused_ordering(20) 00:08:31.894 fused_ordering(21) 00:08:31.894 fused_ordering(22) 00:08:31.894 fused_ordering(23) 00:08:31.894 fused_ordering(24) 00:08:31.894 fused_ordering(25) 00:08:31.894 fused_ordering(26) 00:08:31.894 fused_ordering(27) 00:08:31.894 fused_ordering(28) 00:08:31.894 fused_ordering(29) 00:08:31.894 fused_ordering(30) 00:08:31.894 fused_ordering(31) 00:08:31.894 fused_ordering(32) 00:08:31.894 fused_ordering(33) 00:08:31.894 fused_ordering(34) 00:08:31.894 fused_ordering(35) 00:08:31.894 fused_ordering(36) 00:08:31.894 fused_ordering(37) 00:08:31.894 fused_ordering(38) 00:08:31.894 fused_ordering(39) 00:08:31.894 fused_ordering(40) 00:08:31.894 fused_ordering(41) 00:08:31.894 fused_ordering(42) 00:08:31.894 fused_ordering(43) 00:08:31.894 
fused_ordering(44) 00:08:31.894 fused_ordering(45) 00:08:31.894 fused_ordering(46) 00:08:31.894 fused_ordering(47) 00:08:31.894 fused_ordering(48) 00:08:31.894 fused_ordering(49) 00:08:31.894 fused_ordering(50) 00:08:31.894 fused_ordering(51) 00:08:31.894 fused_ordering(52) 00:08:31.894 fused_ordering(53) 00:08:31.894 fused_ordering(54) 00:08:31.894 fused_ordering(55) 00:08:31.894 fused_ordering(56) 00:08:31.894 fused_ordering(57) 00:08:31.894 fused_ordering(58) 00:08:31.894 fused_ordering(59) 00:08:31.894 fused_ordering(60) 00:08:31.894 fused_ordering(61) 00:08:31.894 fused_ordering(62) 00:08:31.894 fused_ordering(63) 00:08:31.894 fused_ordering(64) 00:08:31.894 fused_ordering(65) 00:08:31.894 fused_ordering(66) 00:08:31.894 fused_ordering(67) 00:08:31.894 fused_ordering(68) 00:08:31.894 fused_ordering(69) 00:08:31.894 fused_ordering(70) 00:08:31.894 fused_ordering(71) 00:08:31.894 fused_ordering(72) 00:08:31.894 fused_ordering(73) 00:08:31.894 fused_ordering(74) 00:08:31.894 fused_ordering(75) 00:08:31.894 fused_ordering(76) 00:08:31.894 fused_ordering(77) 00:08:31.894 fused_ordering(78) 00:08:31.894 fused_ordering(79) 00:08:31.894 fused_ordering(80) 00:08:31.894 fused_ordering(81) 00:08:31.894 fused_ordering(82) 00:08:31.894 fused_ordering(83) 00:08:31.894 fused_ordering(84) 00:08:31.894 fused_ordering(85) 00:08:31.894 fused_ordering(86) 00:08:31.894 fused_ordering(87) 00:08:31.894 fused_ordering(88) 00:08:31.894 fused_ordering(89) 00:08:31.894 fused_ordering(90) 00:08:31.894 fused_ordering(91) 00:08:31.894 fused_ordering(92) 00:08:31.894 fused_ordering(93) 00:08:31.894 fused_ordering(94) 00:08:31.894 fused_ordering(95) 00:08:31.894 fused_ordering(96) 00:08:31.894 fused_ordering(97) 00:08:31.894 fused_ordering(98) 00:08:31.894 fused_ordering(99) 00:08:31.894 fused_ordering(100) 00:08:31.894 fused_ordering(101) 00:08:31.894 fused_ordering(102) 00:08:31.894 fused_ordering(103) 00:08:31.894 fused_ordering(104) 00:08:31.894 fused_ordering(105) 00:08:31.894 fused_ordering(106) 00:08:31.894 fused_ordering(107) 00:08:31.894 fused_ordering(108) 00:08:31.894 fused_ordering(109) 00:08:31.894 fused_ordering(110) 00:08:31.894 fused_ordering(111) 00:08:31.894 fused_ordering(112) 00:08:31.894 fused_ordering(113) 00:08:31.894 fused_ordering(114) 00:08:31.894 fused_ordering(115) 00:08:31.894 fused_ordering(116) 00:08:31.894 fused_ordering(117) 00:08:31.894 fused_ordering(118) 00:08:31.894 fused_ordering(119) 00:08:31.894 fused_ordering(120) 00:08:31.894 fused_ordering(121) 00:08:31.894 fused_ordering(122) 00:08:31.894 fused_ordering(123) 00:08:31.894 fused_ordering(124) 00:08:31.894 fused_ordering(125) 00:08:31.894 fused_ordering(126) 00:08:31.894 fused_ordering(127) 00:08:31.894 fused_ordering(128) 00:08:31.894 fused_ordering(129) 00:08:31.894 fused_ordering(130) 00:08:31.894 fused_ordering(131) 00:08:31.894 fused_ordering(132) 00:08:31.894 fused_ordering(133) 00:08:31.894 fused_ordering(134) 00:08:31.894 fused_ordering(135) 00:08:31.894 fused_ordering(136) 00:08:31.894 fused_ordering(137) 00:08:31.894 fused_ordering(138) 00:08:31.894 fused_ordering(139) 00:08:31.894 fused_ordering(140) 00:08:31.894 fused_ordering(141) 00:08:31.894 fused_ordering(142) 00:08:31.894 fused_ordering(143) 00:08:31.894 fused_ordering(144) 00:08:31.894 fused_ordering(145) 00:08:31.894 fused_ordering(146) 00:08:31.894 fused_ordering(147) 00:08:31.894 fused_ordering(148) 00:08:31.894 fused_ordering(149) 00:08:31.894 fused_ordering(150) 00:08:31.894 fused_ordering(151) 00:08:31.894 fused_ordering(152) 00:08:31.894 
fused_ordering(153) 00:08:31.894 fused_ordering(154) 00:08:31.894 fused_ordering(155) 00:08:31.894 fused_ordering(156) 00:08:31.894 fused_ordering(157) 00:08:31.894 fused_ordering(158) 00:08:31.894 fused_ordering(159) 00:08:31.894 fused_ordering(160) 00:08:31.894 fused_ordering(161) 00:08:31.894 fused_ordering(162) 00:08:31.894 fused_ordering(163) 00:08:31.894 fused_ordering(164) 00:08:31.894 fused_ordering(165) 00:08:31.894 fused_ordering(166) 00:08:31.894 fused_ordering(167) 00:08:31.894 fused_ordering(168) 00:08:31.894 fused_ordering(169) 00:08:31.894 fused_ordering(170) 00:08:31.894 fused_ordering(171) 00:08:31.894 fused_ordering(172) 00:08:31.894 fused_ordering(173) 00:08:31.894 fused_ordering(174) 00:08:31.894 fused_ordering(175) 00:08:31.894 fused_ordering(176) 00:08:31.894 fused_ordering(177) 00:08:31.894 fused_ordering(178) 00:08:31.894 fused_ordering(179) 00:08:31.894 fused_ordering(180) 00:08:31.894 fused_ordering(181) 00:08:31.894 fused_ordering(182) 00:08:31.894 fused_ordering(183) 00:08:31.894 fused_ordering(184) 00:08:31.894 fused_ordering(185) 00:08:31.894 fused_ordering(186) 00:08:31.894 fused_ordering(187) 00:08:31.894 fused_ordering(188) 00:08:31.894 fused_ordering(189) 00:08:31.894 fused_ordering(190) 00:08:31.894 fused_ordering(191) 00:08:31.894 fused_ordering(192) 00:08:31.894 fused_ordering(193) 00:08:31.894 fused_ordering(194) 00:08:31.894 fused_ordering(195) 00:08:31.894 fused_ordering(196) 00:08:31.894 fused_ordering(197) 00:08:31.894 fused_ordering(198) 00:08:31.894 fused_ordering(199) 00:08:31.894 fused_ordering(200) 00:08:31.894 fused_ordering(201) 00:08:31.894 fused_ordering(202) 00:08:31.894 fused_ordering(203) 00:08:31.894 fused_ordering(204) 00:08:31.894 fused_ordering(205) 00:08:32.152 fused_ordering(206) 00:08:32.152 fused_ordering(207) 00:08:32.152 fused_ordering(208) 00:08:32.152 fused_ordering(209) 00:08:32.152 fused_ordering(210) 00:08:32.152 fused_ordering(211) 00:08:32.152 fused_ordering(212) 00:08:32.152 fused_ordering(213) 00:08:32.152 fused_ordering(214) 00:08:32.152 fused_ordering(215) 00:08:32.152 fused_ordering(216) 00:08:32.152 fused_ordering(217) 00:08:32.152 fused_ordering(218) 00:08:32.152 fused_ordering(219) 00:08:32.152 fused_ordering(220) 00:08:32.152 fused_ordering(221) 00:08:32.152 fused_ordering(222) 00:08:32.152 fused_ordering(223) 00:08:32.152 fused_ordering(224) 00:08:32.152 fused_ordering(225) 00:08:32.152 fused_ordering(226) 00:08:32.152 fused_ordering(227) 00:08:32.152 fused_ordering(228) 00:08:32.152 fused_ordering(229) 00:08:32.152 fused_ordering(230) 00:08:32.152 fused_ordering(231) 00:08:32.152 fused_ordering(232) 00:08:32.152 fused_ordering(233) 00:08:32.152 fused_ordering(234) 00:08:32.152 fused_ordering(235) 00:08:32.152 fused_ordering(236) 00:08:32.152 fused_ordering(237) 00:08:32.152 fused_ordering(238) 00:08:32.152 fused_ordering(239) 00:08:32.152 fused_ordering(240) 00:08:32.152 fused_ordering(241) 00:08:32.152 fused_ordering(242) 00:08:32.152 fused_ordering(243) 00:08:32.152 fused_ordering(244) 00:08:32.152 fused_ordering(245) 00:08:32.152 fused_ordering(246) 00:08:32.152 fused_ordering(247) 00:08:32.152 fused_ordering(248) 00:08:32.152 fused_ordering(249) 00:08:32.152 fused_ordering(250) 00:08:32.152 fused_ordering(251) 00:08:32.152 fused_ordering(252) 00:08:32.152 fused_ordering(253) 00:08:32.152 fused_ordering(254) 00:08:32.152 fused_ordering(255) 00:08:32.152 fused_ordering(256) 00:08:32.152 fused_ordering(257) 00:08:32.153 fused_ordering(258) 00:08:32.153 fused_ordering(259) 00:08:32.153 fused_ordering(260) 
00:08:32.153 fused_ordering(261) ... 00:08:33.847 fused_ordering(1012) [fused_ordering progress counters 262 through 1011 are identical single-line entries; timestamps advance 00:08:32.153 -> 00:08:32.718 -> 00:08:33.282 -> 00:08:33.847]
00:08:33.847 fused_ordering(1013) 00:08:33.847 fused_ordering(1014) 00:08:33.847 fused_ordering(1015) 00:08:33.847 fused_ordering(1016) 00:08:33.847 fused_ordering(1017) 00:08:33.847 fused_ordering(1018) 00:08:33.847 fused_ordering(1019) 00:08:33.847 fused_ordering(1020) 00:08:33.847 fused_ordering(1021) 00:08:33.847 fused_ordering(1022) 00:08:33.847 fused_ordering(1023) 00:08:33.847 10:25:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:33.847 10:25:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:33.847 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:33.847 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:33.847 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.847 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:33.847 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.847 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.847 rmmod nvme_tcp 00:08:33.847 rmmod nvme_fabrics 00:08:33.847 rmmod nvme_keyring 00:08:33.847 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.848 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:33.848 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:33.848 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1133543 ']' 00:08:33.848 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1133543 00:08:33.848 10:25:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1133543 ']' 00:08:33.848 10:25:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1133543 00:08:33.848 10:25:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:33.848 10:25:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:33.848 10:25:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1133543 00:08:33.848 10:25:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:33.848 10:25:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:33.848 10:25:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1133543' 00:08:33.848 killing process with pid 1133543 00:08:33.848 10:25:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1133543 00:08:33.848 10:25:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1133543 00:08:34.107 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:34.107 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:34.107 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:34.107 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.107 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:34.107 10:25:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.107 10:25:22 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.107 10:25:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.029 10:25:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:36.029 00:08:36.029 real 0m7.389s 00:08:36.029 user 0m4.909s 00:08:36.029 sys 0m3.077s 00:08:36.029 10:25:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.029 10:25:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:36.029 ************************************ 00:08:36.029 END TEST nvmf_fused_ordering 00:08:36.029 ************************************ 00:08:36.294 10:25:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:36.294 10:25:24 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:36.294 10:25:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:36.294 10:25:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.294 10:25:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:36.294 ************************************ 00:08:36.294 START TEST nvmf_delete_subsystem 00:08:36.294 ************************************ 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:36.294 * Looking for test storage... 00:08:36.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.294 10:25:24 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.294 10:25:24 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.294 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:36.295 10:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.868 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.869 10:25:26 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:38.869 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:38.869 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.869 10:25:26 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:38.869 Found net devices under 0000:09:00.0: cvl_0_0 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:38.869 Found net devices under 0000:09:00.1: cvl_0_1 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:38.869 10:25:26 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:38.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:08:38.869 00:08:38.869 --- 10.0.0.2 ping statistics --- 00:08:38.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.869 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:08:38.869 00:08:38.869 --- 10.0.0.1 ping statistics --- 00:08:38.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.869 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1135896 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1135896 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1135896 ']' 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.869 10:25:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 [2024-07-15 10:25:27.027217] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:38.869 [2024-07-15 10:25:27.027307] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.869 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.869 [2024-07-15 10:25:27.092693] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:38.869 [2024-07-15 10:25:27.202862] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
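Before the target comes up, nvmf_tcp_init (nvmf/common.sh) takes the two ice/e810 ports enumerated above (cvl_0_0 and cvl_0_1 on this host; the names and PCI addresses are host-specific) and moves the target-side port into its own network namespace so initiator and target traffic crosses the physical link rather than the local stack. A minimal sketch of that setup, using the addresses from this run, is:

    # flush stale addresses, then isolate the target port in a namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1, target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP (port 4420) in on the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify connectivity in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target then runs inside the namespace on cores 0-1 (-m 0x3)
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3

The sub-millisecond ping round-trips recorded above (0.131 ms and 0.116 ms) confirm both directions are reachable before any NVMe traffic is attempted.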
00:08:38.869 [2024-07-15 10:25:27.202931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.869 [2024-07-15 10:25:27.202960] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.869 [2024-07-15 10:25:27.202972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.869 [2024-07-15 10:25:27.202982] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.869 [2024-07-15 10:25:27.203049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.869 [2024-07-15 10:25:27.203054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 [2024-07-15 10:25:27.350458] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 [2024-07-15 10:25:27.366637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 NULL1 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 Delay0 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1135918 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:38.869 10:25:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:39.125 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.125 [2024-07-15 10:25:27.441329] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
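With the target listening on 10.0.0.2:4420, the test builds the subsystem it is about to delete: a null bdev wrapped in a delay bdev, so that I/O is still outstanding when the deletion lands. The rpc_cmd calls traced above are equivalent to the following scripts/rpc.py invocations (rpc_cmd is the harness wrapper around rpc.py; the socket is the /var/tmp/spdk.sock this target was started with), and spdk_nvme_perf supplies the competing I/O load:

    # target configuration, parameters taken from the trace above
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512              # 1000 MB null bdev, 512 B blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000             # latencies in microseconds: ~1 s added per I/O
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # initiator side: 5 s of 512 B random 70/30 read/write at queue depth 128
    build/bin/spdk_nvme_perf -c 0xC -q 128 -o 512 -w randrw -M 70 -t 5 -P 4 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

Two seconds into that run (the sleep 2 at delete_subsystem.sh@30) the test issues nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 while the delayed I/O is still queued. The long run of "completed with error (sct=0, sc=8)" lines that follows is the expected outcome: sc=8 is the generic "command aborted due to SQ deletion" status, and the "starting I/O failed: -6" entries are new submissions being rejected (-ENXIO) as the qpairs are torn down.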
00:08:41.016 10:25:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.016 10:25:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.016 10:25:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:41.016 Read completed with error (sct=0, sc=8) 00:08:41.016 Read completed with error (sct=0, sc=8) 00:08:41.016 Read completed with error (sct=0, sc=8) 00:08:41.016 starting I/O failed: -6 00:08:41.016 Read completed with error (sct=0, sc=8) 00:08:41.016 Read completed with error (sct=0, sc=8) 00:08:41.016 Read completed with error (sct=0, sc=8) 00:08:41.016 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 [2024-07-15 10:25:29.522608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e867a0 is same with the state(5) to be set 00:08:41.017 Read completed 
with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 starting I/O failed: -6 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 [2024-07-15 10:25:29.523123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3b64000c00 is same with the state(5) to be set 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error 
(sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.017 Write completed with error (sct=0, sc=8) 00:08:41.017 Read completed with error (sct=0, sc=8) 00:08:41.982 [2024-07-15 10:25:30.495944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e87ac0 is same with the state(5) to be set 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Write completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Write completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Write completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 [2024-07-15 10:25:30.521417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3b6400d2f0 is same with the state(5) to be set 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Write completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Write completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Write completed with error (sct=0, sc=8) 00:08:41.982 Write completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 [2024-07-15 10:25:30.527229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e865c0 is same with the state(5) to be set 00:08:41.982 Write completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Write completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Write completed with error (sct=0, sc=8) 00:08:41.982 Write completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.982 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Write completed with error (sct=0, sc=8) 00:08:41.983 Write completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Write completed with error (sct=0, sc=8) 00:08:41.983 
Write completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 [2024-07-15 10:25:30.527539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e863e0 is same with the state(5) to be set 00:08:41.983 Write completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Write completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Write completed with error (sct=0, sc=8) 00:08:41.983 Write completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Write completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Write completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Write completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Write completed with error (sct=0, sc=8) 00:08:41.983 Write completed with error (sct=0, sc=8) 00:08:41.983 Write completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 Write completed with error (sct=0, sc=8) 00:08:41.983 Read completed with error (sct=0, sc=8) 00:08:41.983 [2024-07-15 10:25:30.527763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e86980 is same with the state(5) to be set 00:08:41.983 Initializing NVMe Controllers 00:08:41.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:41.983 Controller IO queue size 128, less than required. 00:08:41.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:41.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:41.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:41.983 Initialization complete. Launching workers. 
00:08:41.983 ======================================================== 00:08:41.983 Latency(us) 00:08:41.983 Device Information : IOPS MiB/s Average min max 00:08:41.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.70 0.09 960897.52 880.39 1013874.27 00:08:41.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 145.93 0.07 918054.15 290.13 1012610.37 00:08:41.983 ======================================================== 00:08:41.983 Total : 322.64 0.16 941519.13 290.13 1013874.27 00:08:41.983 00:08:41.983 [2024-07-15 10:25:30.528698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e87ac0 (9): Bad file descriptor 00:08:41.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:41.983 10:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.983 10:25:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:41.983 10:25:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1135918 00:08:41.983 10:25:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:42.546 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:42.546 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1135918 00:08:42.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1135918) - No such process 00:08:42.546 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1135918 00:08:42.546 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:08:42.546 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1135918 00:08:42.546 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1135918 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:42.547 [2024-07-15 10:25:31.050560] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1136441 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1136441 00:08:42.547 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.547 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.803 [2024-07-15 10:25:31.113150] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
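At this point delete_subsystem.sh has re-created nqn.2016-06.io.spdk:cnode1, attached the Delay0 namespace, and launched spdk_nvme_perf in the background (perf_pid=1136441); the lines that follow are the script polling that PID. A minimal sketch of the launch-and-poll pattern visible in the trace (the loop structure and variable names are illustrative, the perf arguments are copied from the log):

  # start the initiator workload against the freshly re-created subsystem
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # poll until perf exits; kill -0 only checks that the PID still exists
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && break   # same 20-iteration cap as in the trace
      sleep 0.5                     # matches the 0.5 s sleeps logged around it
  done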
00:08:43.060 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:43.060 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1136441 00:08:43.060 10:25:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.624 10:25:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:43.624 10:25:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1136441 00:08:43.624 10:25:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:44.189 10:25:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:44.189 10:25:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1136441 00:08:44.189 10:25:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:44.752 10:25:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:44.752 10:25:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1136441 00:08:44.752 10:25:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:45.315 10:25:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:45.316 10:25:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1136441 00:08:45.316 10:25:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:45.573 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:45.573 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1136441 00:08:45.573 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:45.830 Initializing NVMe Controllers 00:08:45.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:45.830 Controller IO queue size 128, less than required. 00:08:45.830 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:45.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:45.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:45.830 Initialization complete. Launching workers. 
00:08:45.830 ======================================================== 00:08:45.830 Latency(us) 00:08:45.830 Device Information : IOPS MiB/s Average min max 00:08:45.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003388.48 1000226.94 1011976.12 00:08:45.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004908.30 1000200.39 1042031.43 00:08:45.830 ======================================================== 00:08:45.830 Total : 256.00 0.12 1004148.39 1000200.39 1042031.43 00:08:45.830 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1136441 00:08:46.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1136441) - No such process 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1136441 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.087 rmmod nvme_tcp 00:08:46.087 rmmod nvme_fabrics 00:08:46.087 rmmod nvme_keyring 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1135896 ']' 00:08:46.087 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1135896 00:08:46.346 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1135896 ']' 00:08:46.346 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1135896 00:08:46.347 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:08:46.347 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:46.347 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1135896 00:08:46.347 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:46.347 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:46.347 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1135896' 00:08:46.347 killing process with pid 1135896 00:08:46.347 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1135896 00:08:46.347 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1135896 00:08:46.607 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:46.607 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:46.607 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:46.607 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.607 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.607 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.607 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.607 10:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.511 10:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:48.511 00:08:48.511 real 0m12.351s 00:08:48.511 user 0m27.600s 00:08:48.511 sys 0m3.081s 00:08:48.511 10:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.511 10:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.511 ************************************ 00:08:48.511 END TEST nvmf_delete_subsystem 00:08:48.511 ************************************ 00:08:48.511 10:25:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:48.511 10:25:36 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:08:48.511 10:25:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:48.511 10:25:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.511 10:25:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:48.511 ************************************ 00:08:48.511 START TEST nvmf_ns_masking 00:08:48.511 ************************************ 00:08:48.511 10:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:08:48.768 * Looking for test storage... 
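Before ns_masking.sh starts, nvmftestfini has just torn down the previous test: the host-side NVMe/TCP modules are unloaded, the nvmf_tgt process (pid 1135896) is killed, and the test interface addresses are flushed. Roughly, using only commands that appear in the trace (the network-namespace removal happens inside the _remove_spdk_ns helper, whose body is not shown, so that last step is an assumption):

  # unload the host-side fabrics modules pulled in for the test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # stop the target application started for the previous test
  kill "$nvmfpid"
  wait "$nvmfpid" 2>/dev/null

  # drop the test addresses; the netns removal is the assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null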
00:08:48.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3c1583e8-59b1-469e-bd66-e233d3cb3714 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=9abdcafc-6e88-4ab5-91cd-17c227aa1c05 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c087e10d-b904-408f-b54c-3576c84ffaec 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:08:48.768 10:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:50.664 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:50.664 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.664 
10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:50.664 Found net devices under 0000:09:00.0: cvl_0_0 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:50.664 Found net devices under 0000:09:00.1: cvl_0_1 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:50.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:08:50.664 00:08:50.664 --- 10.0.0.2 ping statistics --- 00:08:50.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.664 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:08:50.664 00:08:50.664 --- 10.0.0.1 ping statistics --- 00:08:50.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.664 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1138785 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1138785 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1138785 ']' 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.664 10:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.665 10:25:39 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.665 10:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.665 10:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:50.921 [2024-07-15 10:25:39.254968] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:50.921 [2024-07-15 10:25:39.255037] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.921 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.921 [2024-07-15 10:25:39.314672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.921 [2024-07-15 10:25:39.416216] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.921 [2024-07-15 10:25:39.416269] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.921 [2024-07-15 10:25:39.416292] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.921 [2024-07-15 10:25:39.416303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.921 [2024-07-15 10:25:39.416313] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.921 [2024-07-15 10:25:39.416336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.177 10:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.177 10:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:08:51.177 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.177 10:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.177 10:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:51.177 10:25:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.177 10:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:51.434 [2024-07-15 10:25:39.772112] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.434 10:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:08:51.434 10:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:08:51.434 10:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:08:51.692 Malloc1 00:08:51.692 10:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:08:51.949 Malloc2 00:08:51.949 10:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
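The target side of ns_masking.sh is now being assembled over /var/tmp/spdk.sock: a TCP transport, two 64 MiB malloc bdevs, and the cnode1 subsystem. Condensed from the rpc.py calls in the trace (rpc_py is the script's alias for scripts/rpc.py):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # transport and backing bdevs (64 MiB, 512 B blocks, as set by the script)
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc1
  $rpc_py bdev_malloc_create 64 512 -b Malloc2

  # subsystem with auto host attach (-a) and the serial checked by waitforserial
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME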
00:08:52.206 10:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:08:52.464 10:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.720 [2024-07-15 10:25:41.099377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.720 10:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:08:52.720 10:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c087e10d-b904-408f-b54c-3576c84ffaec -a 10.0.0.2 -s 4420 -i 4 00:08:52.976 10:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:08:52.976 10:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:52.976 10:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:52.976 10:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:52.976 10:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:54.866 [ 0]:0x1 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:54.866 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:55.123 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=05bcb82007fc48f4ac345b66f9ef902c 00:08:55.123 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 05bcb82007fc48f4ac345b66f9ef902c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:55.123 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
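ns_is_visible in the trace decides whether a namespace is exposed to the connected host by listing the controller's namespaces and checking that the NGUID reported by Identify Namespace is non-zero. A sketch of that check for NSID 0x1 on the nvme0 controller found above (the function name and the all-zero NGUID comparison follow the logged helper; error handling is simplified):

  ns_is_visible() {
      local nsid=$1
      # the NSID must appear in the active namespace list, e.g. "[ 0]:0x1"
      nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
      # ...and Identify Namespace must report a real (non-zero) NGUID
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

  ns_is_visible 0x1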
00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:55.380 [ 0]:0x1 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=05bcb82007fc48f4ac345b66f9ef902c 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 05bcb82007fc48f4ac345b66f9ef902c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:55.380 [ 1]:0x2 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd27954b8f1c45ba8d30ab32e8d7565a 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd27954b8f1c45ba8d30ab32e8d7565a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:55.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.380 10:25:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.637 10:25:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:08:55.892 10:25:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:08:55.892 10:25:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c087e10d-b904-408f-b54c-3576c84ffaec -a 10.0.0.2 -s 4420 -i 4 00:08:56.148 10:25:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:08:56.148 10:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:56.148 10:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:56.148 10:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:08:56.148 10:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:08:56.148 10:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:58.054 10:25:46 
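The namespace is now re-added with --no-auto-visible, so it should stay hidden from nqn.2016-06.io.spdk:host1 until it is explicitly mapped, and the host reconnects to verify exactly that. The two steps as they appear in the trace ($rpc_py stands for the full scripts/rpc.py path; the -I value is the HOSTID generated earlier in the script):

  # target: attach Malloc1 as NSID 1, but do not expose it to any host by default
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

  # host: reconnect with the test host NQN/ID and 4 I/O queues
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I c087e10d-b904-408f-b54c-3576c84ffaec -a 10.0.0.2 -s 4420 -i 4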
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:58.054 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:58.309 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:58.309 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:58.309 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:58.309 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:58.309 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:58.309 10:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:58.309 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:08:58.310 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:58.310 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:58.310 [ 0]:0x2 00:08:58.310 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:58.310 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:58.310 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd27954b8f1c45ba8d30ab32e8d7565a 00:08:58.310 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
cd27954b8f1c45ba8d30ab32e8d7565a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:58.310 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:58.567 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:08:58.567 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:58.567 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:58.567 [ 0]:0x1 00:08:58.567 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:58.567 10:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:58.567 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=05bcb82007fc48f4ac345b66f9ef902c 00:08:58.567 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 05bcb82007fc48f4ac345b66f9ef902c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:58.567 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:08:58.567 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:58.567 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:58.567 [ 1]:0x2 00:08:58.567 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:58.567 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:58.567 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd27954b8f1c45ba8d30ab32e8d7565a 00:08:58.567 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd27954b8f1c45ba8d30ab32e8d7565a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:58.567 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:58.825 [ 0]:0x2 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:58.825 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:59.082 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd27954b8f1c45ba8d30ab32e8d7565a 00:08:59.082 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd27954b8f1c45ba8d30ab32e8d7565a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:59.082 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:08:59.082 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:59.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.082 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:59.340 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:08:59.340 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c087e10d-b904-408f-b54c-3576c84ffaec -a 10.0.0.2 -s 4420 -i 4 00:08:59.340 10:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:08:59.340 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:59.340 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:59.340 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:08:59.340 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:08:59.340 10:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
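00:09:01.864 The ns_is_visible checks traced throughout this test (target/ns_masking.sh@43-45) boil down to a small helper. The sketch below is reconstructed from the xtrace output alone, assumes the controller enumerated as /dev/nvme0 as in this run, and is not the verbatim script source:
ns_is_visible() {
    # list the controller's active namespaces and look for the requested ID (e.g. 0x1)
    nvme list-ns /dev/nvme0 | grep "$1"
    # a namespace hidden from this host reports an all-zero NGUID, so require a non-zero value
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}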
00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:01.864 [ 0]:0x1 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=05bcb82007fc48f4ac345b66f9ef902c 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 05bcb82007fc48f4ac345b66f9ef902c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:01.864 [ 1]:0x2 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:01.864 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd27954b8f1c45ba8d30ab32e8d7565a 00:09:01.865 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd27954b8f1c45ba8d30ab32e8d7565a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:01.865 10:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:01.865 [ 0]:0x2 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd27954b8f1c45ba8d30ab32e8d7565a 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd27954b8f1c45ba8d30ab32e8d7565a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:01.865 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:02.121 [2024-07-15 10:25:50.595709] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:02.121 request: 00:09:02.121 { 00:09:02.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:02.121 "nsid": 2, 00:09:02.121 "host": "nqn.2016-06.io.spdk:host1", 00:09:02.121 "method": "nvmf_ns_remove_host", 00:09:02.121 "req_id": 1 00:09:02.121 } 00:09:02.121 Got JSON-RPC error response 00:09:02.121 response: 00:09:02.121 { 00:09:02.121 "code": -32602, 00:09:02.122 "message": "Invalid parameters" 00:09:02.122 } 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:02.122 [ 0]:0x2 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:02.122 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:02.379 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd27954b8f1c45ba8d30ab32e8d7565a 00:09:02.379 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
cd27954b8f1c45ba8d30ab32e8d7565a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:02.379 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:02.379 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:02.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.379 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1140279 00:09:02.379 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:02.379 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.379 10:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1140279 /var/tmp/host.sock 00:09:02.379 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1140279 ']' 00:09:02.379 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:02.379 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.379 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:02.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:02.379 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.379 10:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:02.379 [2024-07-15 10:25:50.802461] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
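00:09:02.379 The spdk_tgt instance starting here (pid 1140279, -r /var/tmp/host.sock -m 2) acts as the host side for the NGUID comparison that follows. Condensed from the rpc.py calls traced further below, and reusing the NQNs, NGUIDs and addresses of this run, the per-host masking setup is roughly:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# re-add both namespaces with explicit NGUIDs (the UUIDs with dashes stripped by uuid2nguid)
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3C1583E859B1469EBD66E233D3CB3714 -i
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9ABDCAFC6E884AB591CD17C227AA1C05 -i
# expose namespace 1 to host1 and namespace 2 to host2 only
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
# attach one bdev_nvme controller per host NQN through the app on /var/tmp/host.sock ...
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
# ... and confirm each host sees only its namespace by comparing bdev UUIDs
$rpc -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # expect 3c1583e8-59b1-469e-bd66-e233d3cb3714
$rpc -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'   # expect 9abdcafc-6e88-4ab5-91cd-17c227aa1c05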
00:09:02.379 [2024-07-15 10:25:50.802570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1140279 ] 00:09:02.379 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.379 [2024-07-15 10:25:50.864113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.636 [2024-07-15 10:25:50.973477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.893 10:25:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.893 10:25:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:02.893 10:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.151 10:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:03.408 10:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3c1583e8-59b1-469e-bd66-e233d3cb3714 00:09:03.408 10:25:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:03.408 10:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3C1583E859B1469EBD66E233D3CB3714 -i 00:09:03.666 10:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 9abdcafc-6e88-4ab5-91cd-17c227aa1c05 00:09:03.666 10:25:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:03.666 10:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9ABDCAFC6E884AB591CD17C227AA1C05 -i 00:09:03.922 10:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:04.180 10:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:04.439 10:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:04.439 10:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:04.743 nvme0n1 00:09:04.743 10:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:04.743 10:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:09:05.031 nvme1n2 00:09:05.288 10:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:05.288 10:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:05.288 10:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:05.288 10:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:05.288 10:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:05.545 10:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:05.545 10:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:05.545 10:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:05.545 10:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:05.802 10:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3c1583e8-59b1-469e-bd66-e233d3cb3714 == \3\c\1\5\8\3\e\8\-\5\9\b\1\-\4\6\9\e\-\b\d\6\6\-\e\2\3\3\d\3\c\b\3\7\1\4 ]] 00:09:05.802 10:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:05.802 10:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:05.802 10:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:06.059 10:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 9abdcafc-6e88-4ab5-91cd-17c227aa1c05 == \9\a\b\d\c\a\f\c\-\6\e\8\8\-\4\a\b\5\-\9\1\c\d\-\1\7\c\2\2\7\a\a\1\c\0\5 ]] 00:09:06.059 10:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1140279 00:09:06.059 10:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1140279 ']' 00:09:06.059 10:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1140279 00:09:06.059 10:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:06.059 10:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.059 10:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1140279 00:09:06.059 10:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:06.059 10:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:06.059 10:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1140279' 00:09:06.059 killing process with pid 1140279 00:09:06.059 10:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1140279 00:09:06.059 10:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1140279 00:09:06.316 10:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:06.881 10:25:55 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.881 rmmod nvme_tcp 00:09:06.881 rmmod nvme_fabrics 00:09:06.881 rmmod nvme_keyring 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1138785 ']' 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1138785 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1138785 ']' 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1138785 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1138785 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1138785' 00:09:06.881 killing process with pid 1138785 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1138785 00:09:06.881 10:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1138785 00:09:07.139 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.139 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.139 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:07.139 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.139 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.139 10:25:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.139 10:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.139 10:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.043 10:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:09.043 00:09:09.043 real 0m20.551s 00:09:09.043 user 0m26.885s 00:09:09.043 sys 0m3.999s 00:09:09.043 10:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.043 10:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:09.043 ************************************ 00:09:09.043 END TEST nvmf_ns_masking 00:09:09.043 ************************************ 00:09:09.301 10:25:57 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:09:09.301 10:25:57 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:09.301 10:25:57 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:09.301 10:25:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:09.301 10:25:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.301 10:25:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.302 ************************************ 00:09:09.302 START TEST nvmf_nvme_cli 00:09:09.302 ************************************ 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:09.302 * Looking for test storage... 00:09:09.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:09.302 10:25:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:11.832 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:11.832 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:11.832 Found net devices under 0000:09:00.0: cvl_0_0 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:11.832 Found net devices under 0000:09:00.1: cvl_0_1 00:09:11.832 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:11.833 10:25:59 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:11.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:09:11.833 00:09:11.833 --- 10.0.0.2 ping statistics --- 00:09:11.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.833 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:11.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:09:11.833 00:09:11.833 --- 10.0.0.1 ping statistics --- 00:09:11.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.833 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1142779 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1142779 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1142779 ']' 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:11.833 10:25:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:11.833 [2024-07-15 10:26:00.005456] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
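00:09:11.833 The nvmf_tgt starting here is the target for the nvme_cli test. Once waitforlisten 1142779 returns, the test provisions it over RPC; a minimal restatement of the rpc_cmd calls traced further below, keeping the bdev names, NQN, serial and listen address of this run:
# transport, backing bdevs (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), subsystem, namespaces, listeners
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd bdev_malloc_create 64 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# after this, 'nvme discover' reports two discovery log entries and 'nvme connect' exposes both namespaces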
00:09:11.833 [2024-07-15 10:26:00.005535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.833 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.833 [2024-07-15 10:26:00.079780] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.833 [2024-07-15 10:26:00.186879] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.833 [2024-07-15 10:26:00.186925] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.833 [2024-07-15 10:26:00.186948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.833 [2024-07-15 10:26:00.186960] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.833 [2024-07-15 10:26:00.186970] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.833 [2024-07-15 10:26:00.187016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.833 [2024-07-15 10:26:00.187072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.833 [2024-07-15 10:26:00.187128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.833 [2024-07-15 10:26:00.187131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:11.833 [2024-07-15 10:26:00.346667] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:11.833 Malloc0 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.833 10:26:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:12.092 Malloc1 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.092 10:26:00 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:12.092 [2024-07-15 10:26:00.432486] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:09:12.092 00:09:12.092 Discovery Log Number of Records 2, Generation counter 2 00:09:12.092 =====Discovery Log Entry 0====== 00:09:12.092 trtype: tcp 00:09:12.092 adrfam: ipv4 00:09:12.092 subtype: current discovery subsystem 00:09:12.092 treq: not required 00:09:12.092 portid: 0 00:09:12.092 trsvcid: 4420 00:09:12.092 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:12.092 traddr: 10.0.0.2 00:09:12.092 eflags: explicit discovery connections, duplicate discovery information 00:09:12.092 sectype: none 00:09:12.092 =====Discovery Log Entry 1====== 00:09:12.092 trtype: tcp 00:09:12.092 adrfam: ipv4 00:09:12.092 subtype: nvme subsystem 00:09:12.092 treq: not required 00:09:12.092 portid: 0 00:09:12.092 trsvcid: 4420 00:09:12.092 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:12.092 traddr: 10.0.0.2 00:09:12.092 eflags: none 00:09:12.092 sectype: none 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:12.092 10:26:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:12.671 10:26:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:12.671 10:26:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:09:12.671 10:26:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.671 10:26:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:12.671 10:26:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:12.672 10:26:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:09:15.191 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:15.192 10:26:03 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:15.192 /dev/nvme0n1 ]] 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:15.192 10:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:15.449 rmmod nvme_tcp 00:09:15.449 rmmod nvme_fabrics 00:09:15.449 rmmod nvme_keyring 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1142779 ']' 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1142779 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1142779 ']' 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1142779 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1142779 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1142779' 00:09:15.449 killing process with pid 1142779 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1142779 00:09:15.449 10:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1142779 00:09:15.709 10:26:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:15.709 10:26:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:15.709 10:26:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:15.709 10:26:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:15.709 10:26:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:15.709 10:26:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.709 10:26:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.709 10:26:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.243 10:26:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:18.243 00:09:18.243 real 0m8.621s 00:09:18.243 user 0m16.346s 00:09:18.243 sys 0m2.302s 00:09:18.243 10:26:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.243 10:26:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:18.243 ************************************ 00:09:18.243 END TEST nvmf_nvme_cli 00:09:18.243 ************************************ 00:09:18.243 10:26:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:18.243 10:26:06 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:18.243 10:26:06 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:18.243 10:26:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:18.243 10:26:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.243 10:26:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:18.243 ************************************ 00:09:18.243 START TEST nvmf_vfio_user 00:09:18.243 ************************************ 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:18.243 * Looking for test storage... 00:09:18.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:18.243 
10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1143709 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1143709' 00:09:18.243 Process pid: 1143709 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1143709 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1143709 ']' 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.243 10:26:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:18.243 [2024-07-15 10:26:06.418497] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:18.243 [2024-07-15 10:26:06.418602] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.243 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.243 [2024-07-15 10:26:06.476066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.243 [2024-07-15 10:26:06.581481] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.243 [2024-07-15 10:26:06.581539] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.243 [2024-07-15 10:26:06.581561] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.243 [2024-07-15 10:26:06.581571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.243 [2024-07-15 10:26:06.581581] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
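For reference, the vfio-user target bring-up that the next stretch of the log drives through rpc.py boils down to the following sequence (a sketch only, not a verbatim excerpt of the test script; SPDK_ROOT is a placeholder for the spdk checkout used in this run, and the socket directory, bdev name, serial number and NQN are the ones the test assigns to device 1):

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # launch the target on cores 0-3, as in this run
  $SPDK_ROOT/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  # wait for the RPC socket (/var/tmp/spdk.sock) to come up, then:
  $SPDK_ROOT/scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  $SPDK_ROOT/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  $SPDK_ROOT/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The second device (Malloc2, serial SPDK2, nqn.2019-07.io.spdk:cnode2 under /var/run/vfio-user/domain/vfio-user2/2) is set up the same way.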
00:09:18.243 [2024-07-15 10:26:06.581720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.243 [2024-07-15 10:26:06.581836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.244 [2024-07-15 10:26:06.581866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.244 [2024-07-15 10:26:06.581868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.244 10:26:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:18.244 10:26:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:09:18.244 10:26:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:19.175 10:26:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:19.741 10:26:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:19.741 10:26:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:19.741 10:26:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:19.741 10:26:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:19.741 10:26:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:19.741 Malloc1 00:09:19.998 10:26:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:20.256 10:26:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:20.514 10:26:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:20.771 10:26:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:20.771 10:26:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:20.771 10:26:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:21.028 Malloc2 00:09:21.028 10:26:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:21.285 10:26:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:21.543 10:26:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:21.801 10:26:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:21.801 10:26:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:21.801 10:26:10 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:21.801 10:26:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:21.801 10:26:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:21.801 10:26:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:21.801 [2024-07-15 10:26:10.118169] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:21.801 [2024-07-15 10:26:10.118209] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144127 ] 00:09:21.801 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.801 [2024-07-15 10:26:10.152074] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:21.801 [2024-07-15 10:26:10.161973] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:21.801 [2024-07-15 10:26:10.162003] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3f19e7d000 00:09:21.801 [2024-07-15 10:26:10.162972] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:21.801 [2024-07-15 10:26:10.163962] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:21.801 [2024-07-15 10:26:10.164971] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:21.801 [2024-07-15 10:26:10.165974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:21.801 [2024-07-15 10:26:10.166977] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:21.801 [2024-07-15 10:26:10.167984] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:21.801 [2024-07-15 10:26:10.168987] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:21.801 [2024-07-15 10:26:10.169992] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:21.801 [2024-07-15 10:26:10.170999] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:21.801 [2024-07-15 10:26:10.171019] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3f19e72000 00:09:21.801 [2024-07-15 10:26:10.172149] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:21.801 [2024-07-15 10:26:10.186475] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:21.801 [2024-07-15 10:26:10.186507] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:21.801 [2024-07-15 10:26:10.191118] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:21.801 [2024-07-15 10:26:10.191187] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:21.801 [2024-07-15 10:26:10.191282] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:21.801 [2024-07-15 10:26:10.191313] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:21.801 [2024-07-15 10:26:10.191324] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:21.801 [2024-07-15 10:26:10.192127] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:21.801 [2024-07-15 10:26:10.192153] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:21.801 [2024-07-15 10:26:10.192167] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:21.801 [2024-07-15 10:26:10.193811] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:21.801 [2024-07-15 10:26:10.193832] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:21.801 [2024-07-15 10:26:10.193846] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:21.801 [2024-07-15 10:26:10.194135] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:21.801 [2024-07-15 10:26:10.194152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:21.801 [2024-07-15 10:26:10.195520] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:21.801 [2024-07-15 10:26:10.195541] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:21.801 [2024-07-15 10:26:10.195550] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:21.801 [2024-07-15 10:26:10.195561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:21.801 [2024-07-15 10:26:10.195670] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:21.801 [2024-07-15 10:26:10.195678] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:21.801 [2024-07-15 10:26:10.195687] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:21.801 [2024-07-15 10:26:10.196484] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:21.801 [2024-07-15 10:26:10.197488] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:21.801 [2024-07-15 10:26:10.198500] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:21.801 [2024-07-15 10:26:10.199500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:21.801 [2024-07-15 10:26:10.199814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:21.801 [2024-07-15 10:26:10.200513] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:21.801 [2024-07-15 10:26:10.200531] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:21.801 [2024-07-15 10:26:10.200540] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.200563] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:21.801 [2024-07-15 10:26:10.200678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.200710] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:21.801 [2024-07-15 10:26:10.200721] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:21.801 [2024-07-15 10:26:10.200741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:21.801 [2024-07-15 10:26:10.200845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:21.801 [2024-07-15 10:26:10.200866] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:21.801 [2024-07-15 10:26:10.200878] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:21.801 [2024-07-15 10:26:10.200886] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:21.801 [2024-07-15 10:26:10.200894] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:21.801 [2024-07-15 10:26:10.200901] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:21.801 [2024-07-15 10:26:10.200909] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:21.801 [2024-07-15 10:26:10.200916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.200930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.200945] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:21.801 [2024-07-15 10:26:10.200960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:21.801 [2024-07-15 10:26:10.200982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:21.801 [2024-07-15 10:26:10.200995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:21.801 [2024-07-15 10:26:10.201007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:21.801 [2024-07-15 10:26:10.201019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:21.801 [2024-07-15 10:26:10.201028] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201044] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201058] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:21.801 [2024-07-15 10:26:10.201073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:21.801 [2024-07-15 10:26:10.201084] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:21.801 [2024-07-15 10:26:10.201101] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201153] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:21.801 [2024-07-15 10:26:10.201165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:21.801 [2024-07-15 10:26:10.201226] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201255] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:21.801 [2024-07-15 10:26:10.201263] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:21.801 [2024-07-15 10:26:10.201272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:21.801 [2024-07-15 10:26:10.201286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:21.801 [2024-07-15 10:26:10.201309] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:21.801 [2024-07-15 10:26:10.201326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201352] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:21.801 [2024-07-15 10:26:10.201360] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:21.801 [2024-07-15 10:26:10.201369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:21.801 [2024-07-15 10:26:10.201394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:21.801 [2024-07-15 10:26:10.201416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201430] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201442] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:21.801 [2024-07-15 10:26:10.201450] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:21.801 [2024-07-15 10:26:10.201459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:21.801 [2024-07-15 10:26:10.201473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:21.801 [2024-07-15 10:26:10.201487] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201498] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:09:21.801 [2024-07-15 10:26:10.201511] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201522] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201533] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201541] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201550] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:21.801 [2024-07-15 10:26:10.201557] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:21.801 [2024-07-15 10:26:10.201565] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:21.801 [2024-07-15 10:26:10.201590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:21.801 [2024-07-15 10:26:10.201608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:21.801 [2024-07-15 10:26:10.201627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:21.801 [2024-07-15 10:26:10.201639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:21.801 [2024-07-15 10:26:10.201656] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:21.801 [2024-07-15 10:26:10.201667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:21.801 [2024-07-15 10:26:10.201683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:21.801 [2024-07-15 10:26:10.201695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:21.801 [2024-07-15 10:26:10.201717] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:21.801 [2024-07-15 10:26:10.201727] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:21.801 [2024-07-15 10:26:10.201733] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:21.801 [2024-07-15 10:26:10.201739] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:21.801 [2024-07-15 10:26:10.201748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:21.801 [2024-07-15 10:26:10.201759] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:21.801 
[2024-07-15 10:26:10.201767] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:21.801 [2024-07-15 10:26:10.201776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:21.801 [2024-07-15 10:26:10.201819] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:21.801 [2024-07-15 10:26:10.201829] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:21.801 [2024-07-15 10:26:10.201839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:21.801 [2024-07-15 10:26:10.201851] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:21.801 [2024-07-15 10:26:10.201874] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:21.802 [2024-07-15 10:26:10.201884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:21.802 [2024-07-15 10:26:10.201900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:21.802 [2024-07-15 10:26:10.201921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:21.802 [2024-07-15 10:26:10.201940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:21.802 [2024-07-15 10:26:10.201953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:21.802 ===================================================== 00:09:21.802 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:21.802 ===================================================== 00:09:21.802 Controller Capabilities/Features 00:09:21.802 ================================ 00:09:21.802 Vendor ID: 4e58 00:09:21.802 Subsystem Vendor ID: 4e58 00:09:21.802 Serial Number: SPDK1 00:09:21.802 Model Number: SPDK bdev Controller 00:09:21.802 Firmware Version: 24.09 00:09:21.802 Recommended Arb Burst: 6 00:09:21.802 IEEE OUI Identifier: 8d 6b 50 00:09:21.802 Multi-path I/O 00:09:21.802 May have multiple subsystem ports: Yes 00:09:21.802 May have multiple controllers: Yes 00:09:21.802 Associated with SR-IOV VF: No 00:09:21.802 Max Data Transfer Size: 131072 00:09:21.802 Max Number of Namespaces: 32 00:09:21.802 Max Number of I/O Queues: 127 00:09:21.802 NVMe Specification Version (VS): 1.3 00:09:21.802 NVMe Specification Version (Identify): 1.3 00:09:21.802 Maximum Queue Entries: 256 00:09:21.802 Contiguous Queues Required: Yes 00:09:21.802 Arbitration Mechanisms Supported 00:09:21.802 Weighted Round Robin: Not Supported 00:09:21.802 Vendor Specific: Not Supported 00:09:21.802 Reset Timeout: 15000 ms 00:09:21.802 Doorbell Stride: 4 bytes 00:09:21.802 NVM Subsystem Reset: Not Supported 00:09:21.802 Command Sets Supported 00:09:21.802 NVM Command Set: Supported 00:09:21.802 Boot Partition: Not Supported 00:09:21.802 Memory Page Size Minimum: 4096 bytes 00:09:21.802 Memory Page Size Maximum: 4096 bytes 00:09:21.802 Persistent Memory Region: Not Supported 
00:09:21.802 Optional Asynchronous Events Supported 00:09:21.802 Namespace Attribute Notices: Supported 00:09:21.802 Firmware Activation Notices: Not Supported 00:09:21.802 ANA Change Notices: Not Supported 00:09:21.802 PLE Aggregate Log Change Notices: Not Supported 00:09:21.802 LBA Status Info Alert Notices: Not Supported 00:09:21.802 EGE Aggregate Log Change Notices: Not Supported 00:09:21.802 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.802 Zone Descriptor Change Notices: Not Supported 00:09:21.802 Discovery Log Change Notices: Not Supported 00:09:21.802 Controller Attributes 00:09:21.802 128-bit Host Identifier: Supported 00:09:21.802 Non-Operational Permissive Mode: Not Supported 00:09:21.802 NVM Sets: Not Supported 00:09:21.802 Read Recovery Levels: Not Supported 00:09:21.802 Endurance Groups: Not Supported 00:09:21.802 Predictable Latency Mode: Not Supported 00:09:21.802 Traffic Based Keep ALive: Not Supported 00:09:21.802 Namespace Granularity: Not Supported 00:09:21.802 SQ Associations: Not Supported 00:09:21.802 UUID List: Not Supported 00:09:21.802 Multi-Domain Subsystem: Not Supported 00:09:21.802 Fixed Capacity Management: Not Supported 00:09:21.802 Variable Capacity Management: Not Supported 00:09:21.802 Delete Endurance Group: Not Supported 00:09:21.802 Delete NVM Set: Not Supported 00:09:21.802 Extended LBA Formats Supported: Not Supported 00:09:21.802 Flexible Data Placement Supported: Not Supported 00:09:21.802 00:09:21.802 Controller Memory Buffer Support 00:09:21.802 ================================ 00:09:21.802 Supported: No 00:09:21.802 00:09:21.802 Persistent Memory Region Support 00:09:21.802 ================================ 00:09:21.802 Supported: No 00:09:21.802 00:09:21.802 Admin Command Set Attributes 00:09:21.802 ============================ 00:09:21.802 Security Send/Receive: Not Supported 00:09:21.802 Format NVM: Not Supported 00:09:21.802 Firmware Activate/Download: Not Supported 00:09:21.802 Namespace Management: Not Supported 00:09:21.802 Device Self-Test: Not Supported 00:09:21.802 Directives: Not Supported 00:09:21.802 NVMe-MI: Not Supported 00:09:21.802 Virtualization Management: Not Supported 00:09:21.802 Doorbell Buffer Config: Not Supported 00:09:21.802 Get LBA Status Capability: Not Supported 00:09:21.802 Command & Feature Lockdown Capability: Not Supported 00:09:21.802 Abort Command Limit: 4 00:09:21.802 Async Event Request Limit: 4 00:09:21.802 Number of Firmware Slots: N/A 00:09:21.802 Firmware Slot 1 Read-Only: N/A 00:09:21.802 Firmware Activation Without Reset: N/A 00:09:21.802 Multiple Update Detection Support: N/A 00:09:21.802 Firmware Update Granularity: No Information Provided 00:09:21.802 Per-Namespace SMART Log: No 00:09:21.802 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.802 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:21.802 Command Effects Log Page: Supported 00:09:21.802 Get Log Page Extended Data: Supported 00:09:21.802 Telemetry Log Pages: Not Supported 00:09:21.802 Persistent Event Log Pages: Not Supported 00:09:21.802 Supported Log Pages Log Page: May Support 00:09:21.802 Commands Supported & Effects Log Page: Not Supported 00:09:21.802 Feature Identifiers & Effects Log Page:May Support 00:09:21.802 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.802 Data Area 4 for Telemetry Log: Not Supported 00:09:21.802 Error Log Page Entries Supported: 128 00:09:21.802 Keep Alive: Supported 00:09:21.802 Keep Alive Granularity: 10000 ms 00:09:21.802 00:09:21.802 NVM Command Set Attributes 
00:09:21.802 ========================== 00:09:21.802 Submission Queue Entry Size 00:09:21.802 Max: 64 00:09:21.802 Min: 64 00:09:21.802 Completion Queue Entry Size 00:09:21.802 Max: 16 00:09:21.802 Min: 16 00:09:21.802 Number of Namespaces: 32 00:09:21.802 Compare Command: Supported 00:09:21.802 Write Uncorrectable Command: Not Supported 00:09:21.802 Dataset Management Command: Supported 00:09:21.802 Write Zeroes Command: Supported 00:09:21.802 Set Features Save Field: Not Supported 00:09:21.802 Reservations: Not Supported 00:09:21.802 Timestamp: Not Supported 00:09:21.802 Copy: Supported 00:09:21.802 Volatile Write Cache: Present 00:09:21.802 Atomic Write Unit (Normal): 1 00:09:21.802 Atomic Write Unit (PFail): 1 00:09:21.802 Atomic Compare & Write Unit: 1 00:09:21.802 Fused Compare & Write: Supported 00:09:21.802 Scatter-Gather List 00:09:21.802 SGL Command Set: Supported (Dword aligned) 00:09:21.802 SGL Keyed: Not Supported 00:09:21.802 SGL Bit Bucket Descriptor: Not Supported 00:09:21.802 SGL Metadata Pointer: Not Supported 00:09:21.802 Oversized SGL: Not Supported 00:09:21.802 SGL Metadata Address: Not Supported 00:09:21.802 SGL Offset: Not Supported 00:09:21.802 Transport SGL Data Block: Not Supported 00:09:21.802 Replay Protected Memory Block: Not Supported 00:09:21.802 00:09:21.802 Firmware Slot Information 00:09:21.802 ========================= 00:09:21.802 Active slot: 1 00:09:21.802 Slot 1 Firmware Revision: 24.09 00:09:21.802 00:09:21.802 00:09:21.802 Commands Supported and Effects 00:09:21.802 ============================== 00:09:21.802 Admin Commands 00:09:21.802 -------------- 00:09:21.802 Get Log Page (02h): Supported 00:09:21.802 Identify (06h): Supported 00:09:21.802 Abort (08h): Supported 00:09:21.802 Set Features (09h): Supported 00:09:21.802 Get Features (0Ah): Supported 00:09:21.802 Asynchronous Event Request (0Ch): Supported 00:09:21.802 Keep Alive (18h): Supported 00:09:21.802 I/O Commands 00:09:21.802 ------------ 00:09:21.802 Flush (00h): Supported LBA-Change 00:09:21.802 Write (01h): Supported LBA-Change 00:09:21.802 Read (02h): Supported 00:09:21.802 Compare (05h): Supported 00:09:21.802 Write Zeroes (08h): Supported LBA-Change 00:09:21.802 Dataset Management (09h): Supported LBA-Change 00:09:21.802 Copy (19h): Supported LBA-Change 00:09:21.802 00:09:21.802 Error Log 00:09:21.802 ========= 00:09:21.802 00:09:21.802 Arbitration 00:09:21.802 =========== 00:09:21.802 Arbitration Burst: 1 00:09:21.802 00:09:21.802 Power Management 00:09:21.802 ================ 00:09:21.802 Number of Power States: 1 00:09:21.802 Current Power State: Power State #0 00:09:21.802 Power State #0: 00:09:21.802 Max Power: 0.00 W 00:09:21.802 Non-Operational State: Operational 00:09:21.802 Entry Latency: Not Reported 00:09:21.802 Exit Latency: Not Reported 00:09:21.802 Relative Read Throughput: 0 00:09:21.802 Relative Read Latency: 0 00:09:21.802 Relative Write Throughput: 0 00:09:21.802 Relative Write Latency: 0 00:09:21.802 Idle Power: Not Reported 00:09:21.802 Active Power: Not Reported 00:09:21.802 Non-Operational Permissive Mode: Not Supported 00:09:21.802 00:09:21.802 Health Information 00:09:21.802 ================== 00:09:21.802 Critical Warnings: 00:09:21.802 Available Spare Space: OK 00:09:21.802 Temperature: OK 00:09:21.802 Device Reliability: OK 00:09:21.802 Read Only: No 00:09:21.802 Volatile Memory Backup: OK 00:09:21.802 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:21.802 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:21.802 Available Spare: 0% 00:09:21.802 
Available Sp[2024-07-15 10:26:10.202087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:21.802 [2024-07-15 10:26:10.202104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:21.802 [2024-07-15 10:26:10.202165] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:21.802 [2024-07-15 10:26:10.202199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:21.802 [2024-07-15 10:26:10.202211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:21.802 [2024-07-15 10:26:10.202221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:21.802 [2024-07-15 10:26:10.202230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:21.802 [2024-07-15 10:26:10.207813] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:21.802 [2024-07-15 10:26:10.207837] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:21.802 [2024-07-15 10:26:10.208556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:21.802 [2024-07-15 10:26:10.208642] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:21.802 [2024-07-15 10:26:10.208656] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:21.802 [2024-07-15 10:26:10.209548] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:21.802 [2024-07-15 10:26:10.209572] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:21.802 [2024-07-15 10:26:10.209627] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:21.802 [2024-07-15 10:26:10.211605] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:21.802 are Threshold: 0% 00:09:21.802 Life Percentage Used: 0% 00:09:21.802 Data Units Read: 0 00:09:21.802 Data Units Written: 0 00:09:21.802 Host Read Commands: 0 00:09:21.802 Host Write Commands: 0 00:09:21.802 Controller Busy Time: 0 minutes 00:09:21.802 Power Cycles: 0 00:09:21.802 Power On Hours: 0 hours 00:09:21.802 Unsafe Shutdowns: 0 00:09:21.802 Unrecoverable Media Errors: 0 00:09:21.802 Lifetime Error Log Entries: 0 00:09:21.802 Warning Temperature Time: 0 minutes 00:09:21.802 Critical Temperature Time: 0 minutes 00:09:21.802 00:09:21.802 Number of Queues 00:09:21.802 ================ 00:09:21.802 Number of I/O Submission Queues: 127 00:09:21.802 Number of I/O Completion Queues: 127 00:09:21.802 00:09:21.802 Active Namespaces 00:09:21.802 ================= 00:09:21.802 Namespace ID:1 00:09:21.802 Error Recovery Timeout: Unlimited 00:09:21.802 Command 
Set Identifier: NVM (00h) 00:09:21.802 Deallocate: Supported 00:09:21.802 Deallocated/Unwritten Error: Not Supported 00:09:21.802 Deallocated Read Value: Unknown 00:09:21.802 Deallocate in Write Zeroes: Not Supported 00:09:21.802 Deallocated Guard Field: 0xFFFF 00:09:21.802 Flush: Supported 00:09:21.802 Reservation: Supported 00:09:21.802 Namespace Sharing Capabilities: Multiple Controllers 00:09:21.802 Size (in LBAs): 131072 (0GiB) 00:09:21.802 Capacity (in LBAs): 131072 (0GiB) 00:09:21.802 Utilization (in LBAs): 131072 (0GiB) 00:09:21.802 NGUID: 1F8A2149E9BE4D239B689EE89816D81F 00:09:21.802 UUID: 1f8a2149-e9be-4d23-9b68-9ee89816d81f 00:09:21.802 Thin Provisioning: Not Supported 00:09:21.802 Per-NS Atomic Units: Yes 00:09:21.802 Atomic Boundary Size (Normal): 0 00:09:21.802 Atomic Boundary Size (PFail): 0 00:09:21.802 Atomic Boundary Offset: 0 00:09:21.802 Maximum Single Source Range Length: 65535 00:09:21.802 Maximum Copy Length: 65535 00:09:21.802 Maximum Source Range Count: 1 00:09:21.802 NGUID/EUI64 Never Reused: No 00:09:21.802 Namespace Write Protected: No 00:09:21.802 Number of LBA Formats: 1 00:09:21.802 Current LBA Format: LBA Format #00 00:09:21.802 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.802 00:09:21.802 10:26:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:21.802 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.059 [2024-07-15 10:26:10.446739] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:27.315 Initializing NVMe Controllers 00:09:27.315 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:27.315 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:27.315 Initialization complete. Launching workers. 00:09:27.315 ======================================================== 00:09:27.315 Latency(us) 00:09:27.315 Device Information : IOPS MiB/s Average min max 00:09:27.315 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34437.59 134.52 3718.34 1149.40 10527.87 00:09:27.315 ======================================================== 00:09:27.315 Total : 34437.59 134.52 3718.34 1149.40 10527.87 00:09:27.315 00:09:27.315 [2024-07-15 10:26:15.471740] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:27.315 10:26:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:27.315 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.315 [2024-07-15 10:26:15.711933] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:32.573 Initializing NVMe Controllers 00:09:32.573 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:32.573 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:32.573 Initialization complete. Launching workers. 
00:09:32.573 ======================================================== 00:09:32.573 Latency(us) 00:09:32.573 Device Information : IOPS MiB/s Average min max 00:09:32.573 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15912.82 62.16 8049.85 3792.27 15960.69 00:09:32.573 ======================================================== 00:09:32.573 Total : 15912.82 62.16 8049.85 3792.27 15960.69 00:09:32.573 00:09:32.573 [2024-07-15 10:26:20.749641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:32.573 10:26:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:32.573 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.573 [2024-07-15 10:26:20.949635] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:37.848 [2024-07-15 10:26:26.030212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:37.848 Initializing NVMe Controllers 00:09:37.848 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:37.848 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:37.848 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:37.848 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:37.848 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:37.848 Initialization complete. Launching workers. 00:09:37.848 Starting thread on core 2 00:09:37.848 Starting thread on core 3 00:09:37.848 Starting thread on core 1 00:09:37.848 10:26:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:37.848 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.848 [2024-07-15 10:26:26.328294] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:41.129 [2024-07-15 10:26:29.378086] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:41.129 Initializing NVMe Controllers 00:09:41.129 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:41.129 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:41.129 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:09:41.129 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:09:41.129 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:09:41.129 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:09:41.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:41.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:41.129 Initialization complete. Launching workers. 
00:09:41.129 Starting thread on core 1 with urgent priority queue 00:09:41.129 Starting thread on core 2 with urgent priority queue 00:09:41.129 Starting thread on core 3 with urgent priority queue 00:09:41.129 Starting thread on core 0 with urgent priority queue 00:09:41.129 SPDK bdev Controller (SPDK1 ) core 0: 5556.00 IO/s 18.00 secs/100000 ios 00:09:41.129 SPDK bdev Controller (SPDK1 ) core 1: 5095.33 IO/s 19.63 secs/100000 ios 00:09:41.129 SPDK bdev Controller (SPDK1 ) core 2: 5293.67 IO/s 18.89 secs/100000 ios 00:09:41.129 SPDK bdev Controller (SPDK1 ) core 3: 5749.67 IO/s 17.39 secs/100000 ios 00:09:41.129 ======================================================== 00:09:41.129 00:09:41.129 10:26:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:41.129 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.386 [2024-07-15 10:26:29.688341] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:41.386 Initializing NVMe Controllers 00:09:41.386 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:41.386 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:41.386 Namespace ID: 1 size: 0GB 00:09:41.386 Initialization complete. 00:09:41.386 INFO: using host memory buffer for IO 00:09:41.386 Hello world! 00:09:41.386 [2024-07-15 10:26:29.721971] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:41.386 10:26:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:41.386 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.643 [2024-07-15 10:26:30.020293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:42.575 Initializing NVMe Controllers 00:09:42.575 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:42.575 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:42.575 Initialization complete. Launching workers. 
00:09:42.575 submit (in ns) avg, min, max = 8560.1, 3488.9, 4015756.7 00:09:42.575 complete (in ns) avg, min, max = 25761.4, 2083.3, 6008691.1 00:09:42.575 00:09:42.575 Submit histogram 00:09:42.575 ================ 00:09:42.575 Range in us Cumulative Count 00:09:42.575 3.484 - 3.508: 0.1088% ( 15) 00:09:42.575 3.508 - 3.532: 0.5728% ( 64) 00:09:42.575 3.532 - 3.556: 1.9941% ( 196) 00:09:42.575 3.556 - 3.579: 6.3447% ( 600) 00:09:42.575 3.579 - 3.603: 12.8417% ( 896) 00:09:42.575 3.603 - 3.627: 20.9702% ( 1121) 00:09:42.575 3.627 - 3.650: 29.4032% ( 1163) 00:09:42.575 3.650 - 3.674: 37.4302% ( 1107) 00:09:42.575 3.674 - 3.698: 44.8626% ( 1025) 00:09:42.575 3.698 - 3.721: 52.3530% ( 1033) 00:09:42.575 3.721 - 3.745: 57.3127% ( 684) 00:09:42.575 3.745 - 3.769: 61.3661% ( 559) 00:09:42.575 3.769 - 3.793: 64.4478% ( 425) 00:09:42.575 3.793 - 3.816: 67.7471% ( 455) 00:09:42.575 3.816 - 3.840: 71.5104% ( 519) 00:09:42.575 3.840 - 3.864: 75.4768% ( 547) 00:09:42.575 3.864 - 3.887: 78.9935% ( 485) 00:09:42.575 3.887 - 3.911: 82.2130% ( 444) 00:09:42.575 3.911 - 3.935: 85.2513% ( 419) 00:09:42.575 3.935 - 3.959: 87.1220% ( 258) 00:09:42.575 3.959 - 3.982: 88.7898% ( 230) 00:09:42.575 3.982 - 4.006: 90.1530% ( 188) 00:09:42.575 4.006 - 4.030: 91.4437% ( 178) 00:09:42.575 4.030 - 4.053: 92.5386% ( 151) 00:09:42.575 4.053 - 4.077: 93.5610% ( 141) 00:09:42.575 4.077 - 4.101: 94.3006% ( 102) 00:09:42.575 4.101 - 4.124: 94.8372% ( 74) 00:09:42.575 4.124 - 4.148: 95.3738% ( 74) 00:09:42.575 4.148 - 4.172: 95.7799% ( 56) 00:09:42.575 4.172 - 4.196: 96.0554% ( 38) 00:09:42.575 4.196 - 4.219: 96.3164% ( 36) 00:09:42.575 4.219 - 4.243: 96.4397% ( 17) 00:09:42.575 4.243 - 4.267: 96.6065% ( 23) 00:09:42.575 4.267 - 4.290: 96.7080% ( 14) 00:09:42.575 4.290 - 4.314: 96.8748% ( 23) 00:09:42.575 4.314 - 4.338: 96.9908% ( 16) 00:09:42.575 4.338 - 4.361: 97.1068% ( 16) 00:09:42.575 4.361 - 4.385: 97.1866% ( 11) 00:09:42.575 4.385 - 4.409: 97.2808% ( 13) 00:09:42.575 4.409 - 4.433: 97.3751% ( 13) 00:09:42.575 4.433 - 4.456: 97.4114% ( 5) 00:09:42.575 4.480 - 4.504: 97.4621% ( 7) 00:09:42.575 4.504 - 4.527: 97.4911% ( 4) 00:09:42.575 4.551 - 4.575: 97.5129% ( 3) 00:09:42.575 4.599 - 4.622: 97.5274% ( 2) 00:09:42.575 4.622 - 4.646: 97.5491% ( 3) 00:09:42.575 4.646 - 4.670: 97.5709% ( 3) 00:09:42.575 4.670 - 4.693: 97.5781% ( 1) 00:09:42.575 4.693 - 4.717: 97.5854% ( 1) 00:09:42.575 4.717 - 4.741: 97.5926% ( 1) 00:09:42.575 4.741 - 4.764: 97.6216% ( 4) 00:09:42.575 4.764 - 4.788: 97.6651% ( 6) 00:09:42.575 4.788 - 4.812: 97.7014% ( 5) 00:09:42.575 4.812 - 4.836: 97.7377% ( 5) 00:09:42.575 4.836 - 4.859: 97.7739% ( 5) 00:09:42.575 4.859 - 4.883: 97.8392% ( 9) 00:09:42.575 4.883 - 4.907: 97.8827% ( 6) 00:09:42.575 4.907 - 4.930: 97.9044% ( 3) 00:09:42.575 4.930 - 4.954: 97.9407% ( 5) 00:09:42.575 4.954 - 4.978: 98.0059% ( 9) 00:09:42.575 4.978 - 5.001: 98.0785% ( 10) 00:09:42.575 5.001 - 5.025: 98.1002% ( 3) 00:09:42.575 5.025 - 5.049: 98.1147% ( 2) 00:09:42.575 5.049 - 5.073: 98.1437% ( 4) 00:09:42.575 5.073 - 5.096: 98.1727% ( 4) 00:09:42.575 5.096 - 5.120: 98.2017% ( 4) 00:09:42.575 5.120 - 5.144: 98.2380% ( 5) 00:09:42.575 5.144 - 5.167: 98.2742% ( 5) 00:09:42.575 5.167 - 5.191: 98.3032% ( 4) 00:09:42.575 5.191 - 5.215: 98.3395% ( 5) 00:09:42.575 5.215 - 5.239: 98.3613% ( 3) 00:09:42.575 5.239 - 5.262: 98.3830% ( 3) 00:09:42.575 5.262 - 5.286: 98.4120% ( 4) 00:09:42.575 5.310 - 5.333: 98.4338% ( 3) 00:09:42.575 5.333 - 5.357: 98.4483% ( 2) 00:09:42.575 5.381 - 5.404: 98.4700% ( 3) 00:09:42.575 5.404 - 5.428: 98.5063% ( 
5) 00:09:42.575 5.428 - 5.452: 98.5135% ( 1) 00:09:42.575 5.452 - 5.476: 98.5280% ( 2) 00:09:42.575 5.476 - 5.499: 98.5425% ( 2) 00:09:42.575 5.499 - 5.523: 98.5570% ( 2) 00:09:42.575 5.523 - 5.547: 98.5643% ( 1) 00:09:42.575 5.547 - 5.570: 98.5860% ( 3) 00:09:42.575 5.618 - 5.641: 98.5933% ( 1) 00:09:42.575 5.641 - 5.665: 98.6005% ( 1) 00:09:42.575 5.689 - 5.713: 98.6150% ( 2) 00:09:42.575 5.713 - 5.736: 98.6368% ( 3) 00:09:42.575 5.807 - 5.831: 98.6440% ( 1) 00:09:42.575 5.831 - 5.855: 98.6513% ( 1) 00:09:42.575 5.926 - 5.950: 98.6585% ( 1) 00:09:42.575 6.044 - 6.068: 98.6658% ( 1) 00:09:42.575 6.116 - 6.163: 98.6803% ( 2) 00:09:42.575 6.163 - 6.210: 98.6948% ( 2) 00:09:42.575 6.258 - 6.305: 98.7021% ( 1) 00:09:42.575 6.353 - 6.400: 98.7093% ( 1) 00:09:42.575 6.590 - 6.637: 98.7166% ( 1) 00:09:42.575 6.874 - 6.921: 98.7238% ( 1) 00:09:42.575 7.064 - 7.111: 98.7311% ( 1) 00:09:42.575 7.111 - 7.159: 98.7383% ( 1) 00:09:42.575 7.159 - 7.206: 98.7456% ( 1) 00:09:42.575 7.443 - 7.490: 98.7528% ( 1) 00:09:42.575 7.490 - 7.538: 98.7673% ( 2) 00:09:42.575 7.538 - 7.585: 98.7746% ( 1) 00:09:42.575 7.633 - 7.680: 98.7818% ( 1) 00:09:42.575 7.680 - 7.727: 98.7891% ( 1) 00:09:42.575 7.727 - 7.775: 98.7963% ( 1) 00:09:42.575 7.775 - 7.822: 98.8036% ( 1) 00:09:42.575 7.822 - 7.870: 98.8181% ( 2) 00:09:42.575 7.964 - 8.012: 98.8253% ( 1) 00:09:42.575 8.012 - 8.059: 98.8326% ( 1) 00:09:42.575 8.059 - 8.107: 98.8398% ( 1) 00:09:42.575 8.154 - 8.201: 98.8471% ( 1) 00:09:42.575 8.201 - 8.249: 98.8543% ( 1) 00:09:42.575 8.249 - 8.296: 98.8616% ( 1) 00:09:42.575 8.296 - 8.344: 98.8688% ( 1) 00:09:42.575 8.344 - 8.391: 98.8833% ( 2) 00:09:42.575 8.391 - 8.439: 98.8906% ( 1) 00:09:42.575 8.533 - 8.581: 98.8978% ( 1) 00:09:42.575 8.628 - 8.676: 98.9123% ( 2) 00:09:42.575 8.676 - 8.723: 98.9196% ( 1) 00:09:42.575 8.723 - 8.770: 98.9268% ( 1) 00:09:42.575 8.913 - 8.960: 98.9341% ( 1) 00:09:42.575 9.055 - 9.102: 98.9486% ( 2) 00:09:42.575 9.292 - 9.339: 98.9558% ( 1) 00:09:42.575 9.339 - 9.387: 98.9631% ( 1) 00:09:42.575 9.529 - 9.576: 98.9703% ( 1) 00:09:42.575 9.576 - 9.624: 98.9776% ( 1) 00:09:42.575 9.671 - 9.719: 98.9848% ( 1) 00:09:42.575 10.003 - 10.050: 98.9921% ( 1) 00:09:42.575 10.193 - 10.240: 98.9993% ( 1) 00:09:42.575 10.904 - 10.951: 99.0066% ( 1) 00:09:42.575 11.425 - 11.473: 99.0138% ( 1) 00:09:42.575 11.473 - 11.520: 99.0211% ( 1) 00:09:42.575 11.710 - 11.757: 99.0284% ( 1) 00:09:42.575 11.804 - 11.852: 99.0356% ( 1) 00:09:42.575 11.947 - 11.994: 99.0429% ( 1) 00:09:42.575 12.610 - 12.705: 99.0501% ( 1) 00:09:42.575 12.705 - 12.800: 99.0574% ( 1) 00:09:42.575 13.084 - 13.179: 99.0719% ( 2) 00:09:42.575 13.559 - 13.653: 99.0791% ( 1) 00:09:42.575 14.222 - 14.317: 99.0864% ( 1) 00:09:42.575 14.791 - 14.886: 99.1009% ( 2) 00:09:42.575 15.170 - 15.265: 99.1081% ( 1) 00:09:42.575 16.877 - 16.972: 99.1226% ( 2) 00:09:42.575 16.972 - 17.067: 99.1299% ( 1) 00:09:42.575 17.067 - 17.161: 99.1371% ( 1) 00:09:42.575 17.161 - 17.256: 99.1516% ( 2) 00:09:42.575 17.256 - 17.351: 99.1734% ( 3) 00:09:42.575 17.351 - 17.446: 99.1951% ( 3) 00:09:42.575 17.446 - 17.541: 99.2241% ( 4) 00:09:42.575 17.541 - 17.636: 99.2604% ( 5) 00:09:42.575 17.636 - 17.730: 99.3039% ( 6) 00:09:42.575 17.730 - 17.825: 99.3474% ( 6) 00:09:42.575 17.825 - 17.920: 99.3619% ( 2) 00:09:42.575 17.920 - 18.015: 99.3909% ( 4) 00:09:42.575 18.015 - 18.110: 99.4199% ( 4) 00:09:42.575 18.110 - 18.204: 99.4924% ( 10) 00:09:42.575 18.204 - 18.299: 99.5214% ( 4) 00:09:42.575 18.299 - 18.394: 99.5722% ( 7) 00:09:42.575 18.394 - 18.489: 99.6519% ( 11) 
00:09:42.575 18.489 - 18.584: 99.7027% ( 7) 00:09:42.575 18.584 - 18.679: 99.7462% ( 6) 00:09:42.575 18.679 - 18.773: 99.7535% ( 1) 00:09:42.575 18.773 - 18.868: 99.7680% ( 2) 00:09:42.575 19.058 - 19.153: 99.7752% ( 1) 00:09:42.575 19.153 - 19.247: 99.7897% ( 2) 00:09:42.575 19.247 - 19.342: 99.8187% ( 4) 00:09:42.575 19.437 - 19.532: 99.8332% ( 2) 00:09:42.575 19.721 - 19.816: 99.8405% ( 1) 00:09:42.575 20.101 - 20.196: 99.8477% ( 1) 00:09:42.575 20.290 - 20.385: 99.8550% ( 1) 00:09:42.575 25.221 - 25.410: 99.8695% ( 2) 00:09:42.575 37.736 - 37.926: 99.8767% ( 1) 00:09:42.575 2014.625 - 2026.761: 99.8840% ( 1) 00:09:42.575 2111.716 - 2123.852: 99.8912% ( 1) 00:09:42.575 3980.705 - 4004.978: 99.9492% ( 8) 00:09:42.575 4004.978 - 4029.250: 100.0000% ( 7) 00:09:42.575 00:09:42.575 Complete histogram 00:09:42.575 ================== 00:09:42.575 Range in us Cumulative Count 00:09:42.576 2.074 - 2.086: 0.1160% ( 16) 00:09:42.576 2.086 - 2.098: 15.8944% ( 2176) 00:09:42.576 2.098 - 2.110: 39.4678% ( 3251) 00:09:42.576 2.110 - 2.121: 42.6220% ( 435) 00:09:42.576 2.121 - 2.133: 54.5863% ( 1650) 00:09:42.576 2.133 - 2.145: 62.1492% ( 1043) 00:09:42.576 2.145 - 2.157: 64.2956% ( 296) 00:09:42.576 2.157 - 2.169: 71.8440% ( 1041) 00:09:42.576 2.169 - 2.181: 76.1656% ( 596) 00:09:42.576 2.181 - 2.193: 77.5941% ( 197) 00:09:42.576 2.193 - 2.204: 81.8722% ( 590) 00:09:42.576 2.204 - 2.216: 83.8300% ( 270) 00:09:42.576 2.216 - 2.228: 84.4391% ( 84) 00:09:42.576 2.228 - 2.240: 86.4187% ( 273) 00:09:42.576 2.240 - 2.252: 89.3046% ( 398) 00:09:42.576 2.252 - 2.264: 90.6896% ( 191) 00:09:42.576 2.264 - 2.276: 92.1761% ( 205) 00:09:42.576 2.276 - 2.287: 93.3362% ( 160) 00:09:42.576 2.287 - 2.299: 93.6843% ( 48) 00:09:42.576 2.299 - 2.311: 93.9598% ( 38) 00:09:42.576 2.311 - 2.323: 94.3586% ( 55) 00:09:42.576 2.323 - 2.335: 94.8517% ( 68) 00:09:42.576 2.335 - 2.347: 95.0402% ( 26) 00:09:42.576 2.347 - 2.359: 95.1055% ( 9) 00:09:42.576 2.359 - 2.370: 95.1490% ( 6) 00:09:42.576 2.370 - 2.382: 95.1998% ( 7) 00:09:42.576 2.382 - 2.394: 95.2795% ( 11) 00:09:42.576 2.394 - 2.406: 95.5623% ( 39) 00:09:42.576 2.406 - 2.418: 95.9031% ( 47) 00:09:42.576 2.418 - 2.430: 96.2077% ( 42) 00:09:42.576 2.430 - 2.441: 96.5920% ( 53) 00:09:42.576 2.441 - 2.453: 96.9255% ( 46) 00:09:42.576 2.453 - 2.465: 97.1358% ( 29) 00:09:42.576 2.465 - 2.477: 97.3171% ( 25) 00:09:42.576 2.477 - 2.489: 97.5201% ( 28) 00:09:42.576 2.489 - 2.501: 97.6651% ( 20) 00:09:42.576 2.501 - 2.513: 97.7812% ( 16) 00:09:42.576 2.513 - 2.524: 97.9117% ( 18) 00:09:42.576 2.524 - 2.536: 97.9769% ( 9) 00:09:42.576 2.536 - 2.548: 98.0640% ( 12) 00:09:42.576 2.548 - 2.560: 98.1075% ( 6) 00:09:42.576 2.560 - 2.572: 98.1365% ( 4) 00:09:42.576 2.572 - 2.584: 98.1510% ( 2) 00:09:42.576 2.584 - 2.596: 98.1582% ( 1) 00:09:42.576 2.619 - 2.631: 98.1872% ( 4) 00:09:42.576 2.631 - 2.643: 98.2017% ( 2) 00:09:42.576 2.643 - 2.655: 98.2235% ( 3) 00:09:42.576 2.655 - 2.667: 98.2380% ( 2) 00:09:42.576 2.679 - 2.690: 98.2525% ( 2) 00:09:42.576 2.702 - 2.714: 98.2597% ( 1) 00:09:42.576 2.714 - 2.726: 98.2815% ( 3) 00:09:42.576 2.726 - 2.738: 98.3032% ( 3) 00:09:42.576 2.738 - 2.750: 98.3105% ( 1) 00:09:42.576 2.750 - 2.761: 98.3250% ( 2) 00:09:42.576 2.761 - 2.773: 98.3395% ( 2) 00:09:42.576 2.773 - 2.785: 98.3467% ( 1) 00:09:42.576 2.785 - 2.797: 98.3613% ( 2) 00:09:42.576 2.797 - 2.809: 98.3685% ( 1) 00:09:42.576 2.821 - 2.833: 98.3758% ( 1) 00:09:42.576 2.833 - 2.844: 98.3830% ( 1) 00:09:42.576 2.856 - 2.868: 98.3975% ( 2) 00:09:42.576 2.868 - 2.880: 98.4048% ( 1) 
00:09:42.576 2.892 - 2.904: 98.4120% ( 1) 00:09:42.576 2.916 - 2.927: 98.4193% ( 1) 00:09:42.576 2.939 - 2.951: 98.4410% ( 3) 00:09:42.576 2.951 - 2.963: 98.4555% ( 2) 00:09:42.576 2.963 - 2.975: 98.4628% ( 1) 00:09:42.576 2.975 - 2.987: 98.4700% ( 1) 00:09:42.576 2.999 - 3.010: 98.4773% ( 1) 00:09:42.576 3.010 - 3.022: 98.4845% ( 1) 00:09:42.576 3.022 - 3.034: 98.4918% ( 1) 00:09:42.576 3.034 - 3.058: 98.5063% ( 2) 00:09:42.576 3.058 - 3.081: 98.5135% ( 1) 00:09:42.576 3.105 - 3.129: 98.5280% ( 2) 00:09:42.576 3.129 - 3.153: 9[2024-07-15 10:26:31.040294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:42.576 8.5425% ( 2) 00:09:42.576 3.176 - 3.200: 98.5643% ( 3) 00:09:42.576 3.200 - 3.224: 98.5715% ( 1) 00:09:42.576 3.224 - 3.247: 98.5788% ( 1) 00:09:42.576 3.271 - 3.295: 98.5933% ( 2) 00:09:42.576 3.390 - 3.413: 98.6005% ( 1) 00:09:42.576 3.413 - 3.437: 98.6150% ( 2) 00:09:42.576 3.437 - 3.461: 98.6223% ( 1) 00:09:42.576 3.461 - 3.484: 98.6295% ( 1) 00:09:42.576 3.484 - 3.508: 98.6440% ( 2) 00:09:42.576 3.508 - 3.532: 98.6513% ( 1) 00:09:42.576 3.532 - 3.556: 98.6658% ( 2) 00:09:42.576 3.579 - 3.603: 98.6730% ( 1) 00:09:42.576 3.674 - 3.698: 98.6803% ( 1) 00:09:42.576 3.698 - 3.721: 98.6948% ( 2) 00:09:42.576 3.721 - 3.745: 98.7021% ( 1) 00:09:42.576 3.769 - 3.793: 98.7093% ( 1) 00:09:42.576 3.864 - 3.887: 98.7166% ( 1) 00:09:42.576 3.982 - 4.006: 98.7311% ( 2) 00:09:42.576 4.101 - 4.124: 98.7383% ( 1) 00:09:42.576 4.219 - 4.243: 98.7456% ( 1) 00:09:42.576 5.807 - 5.831: 98.7528% ( 1) 00:09:42.576 5.855 - 5.879: 98.7673% ( 2) 00:09:42.576 6.163 - 6.210: 98.7746% ( 1) 00:09:42.576 6.400 - 6.447: 98.7818% ( 1) 00:09:42.576 6.495 - 6.542: 98.7963% ( 2) 00:09:42.576 6.590 - 6.637: 98.8036% ( 1) 00:09:42.576 6.732 - 6.779: 98.8108% ( 1) 00:09:42.576 6.779 - 6.827: 98.8181% ( 1) 00:09:42.576 7.443 - 7.490: 98.8326% ( 2) 00:09:42.576 7.490 - 7.538: 98.8398% ( 1) 00:09:42.576 7.870 - 7.917: 98.8471% ( 1) 00:09:42.576 8.154 - 8.201: 98.8543% ( 1) 00:09:42.576 8.249 - 8.296: 98.8616% ( 1) 00:09:42.576 15.170 - 15.265: 98.8688% ( 1) 00:09:42.576 15.360 - 15.455: 98.8761% ( 1) 00:09:42.576 15.455 - 15.550: 98.8833% ( 1) 00:09:42.576 15.644 - 15.739: 98.8906% ( 1) 00:09:42.576 15.739 - 15.834: 98.9123% ( 3) 00:09:42.576 15.834 - 15.929: 98.9268% ( 2) 00:09:42.576 15.929 - 16.024: 98.9558% ( 4) 00:09:42.576 16.024 - 16.119: 98.9993% ( 6) 00:09:42.576 16.119 - 16.213: 99.0284% ( 4) 00:09:42.576 16.213 - 16.308: 99.0501% ( 3) 00:09:42.576 16.308 - 16.403: 99.0719% ( 3) 00:09:42.576 16.403 - 16.498: 99.0791% ( 1) 00:09:42.576 16.498 - 16.593: 99.1009% ( 3) 00:09:42.576 16.593 - 16.687: 99.1661% ( 9) 00:09:42.576 16.687 - 16.782: 99.1951% ( 4) 00:09:42.576 16.782 - 16.877: 99.2531% ( 8) 00:09:42.576 16.877 - 16.972: 99.2749% ( 3) 00:09:42.576 16.972 - 17.067: 99.2894% ( 2) 00:09:42.576 17.067 - 17.161: 99.2966% ( 1) 00:09:42.576 17.161 - 17.256: 99.3039% ( 1) 00:09:42.576 17.256 - 17.351: 99.3256% ( 3) 00:09:42.576 17.351 - 17.446: 99.3474% ( 3) 00:09:42.576 17.541 - 17.636: 99.3619% ( 2) 00:09:42.576 17.825 - 17.920: 99.3764% ( 2) 00:09:42.576 17.920 - 18.015: 99.3837% ( 1) 00:09:42.576 18.110 - 18.204: 99.3982% ( 2) 00:09:42.576 18.679 - 18.773: 99.4054% ( 1) 00:09:42.576 18.773 - 18.868: 99.4127% ( 1) 00:09:42.576 21.523 - 21.618: 99.4199% ( 1) 00:09:42.576 2038.898 - 2051.034: 99.4272% ( 1) 00:09:42.576 3980.705 - 4004.978: 99.7390% ( 43) 00:09:42.576 4004.978 - 4029.250: 99.9782% ( 33) 00:09:42.576 5971.058 - 5995.330: 99.9855% ( 1) 
00:09:42.576 5995.330 - 6019.603: 100.0000% ( 2) 00:09:42.576 00:09:42.576 10:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:09:42.576 10:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:42.576 10:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:09:42.576 10:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:09:42.576 10:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:42.833 [ 00:09:42.833 { 00:09:42.833 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:42.833 "subtype": "Discovery", 00:09:42.833 "listen_addresses": [], 00:09:42.833 "allow_any_host": true, 00:09:42.833 "hosts": [] 00:09:42.833 }, 00:09:42.833 { 00:09:42.833 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:42.833 "subtype": "NVMe", 00:09:42.833 "listen_addresses": [ 00:09:42.833 { 00:09:42.833 "trtype": "VFIOUSER", 00:09:42.833 "adrfam": "IPv4", 00:09:42.833 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:42.833 "trsvcid": "0" 00:09:42.833 } 00:09:42.833 ], 00:09:42.833 "allow_any_host": true, 00:09:42.833 "hosts": [], 00:09:42.833 "serial_number": "SPDK1", 00:09:42.833 "model_number": "SPDK bdev Controller", 00:09:42.833 "max_namespaces": 32, 00:09:42.833 "min_cntlid": 1, 00:09:42.833 "max_cntlid": 65519, 00:09:42.833 "namespaces": [ 00:09:42.833 { 00:09:42.833 "nsid": 1, 00:09:42.833 "bdev_name": "Malloc1", 00:09:42.833 "name": "Malloc1", 00:09:42.833 "nguid": "1F8A2149E9BE4D239B689EE89816D81F", 00:09:42.833 "uuid": "1f8a2149-e9be-4d23-9b68-9ee89816d81f" 00:09:42.833 } 00:09:42.833 ] 00:09:42.833 }, 00:09:42.833 { 00:09:42.833 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:42.833 "subtype": "NVMe", 00:09:42.833 "listen_addresses": [ 00:09:42.833 { 00:09:42.833 "trtype": "VFIOUSER", 00:09:42.833 "adrfam": "IPv4", 00:09:42.833 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:42.833 "trsvcid": "0" 00:09:42.833 } 00:09:42.833 ], 00:09:42.833 "allow_any_host": true, 00:09:42.833 "hosts": [], 00:09:42.833 "serial_number": "SPDK2", 00:09:42.833 "model_number": "SPDK bdev Controller", 00:09:42.833 "max_namespaces": 32, 00:09:42.833 "min_cntlid": 1, 00:09:42.833 "max_cntlid": 65519, 00:09:42.833 "namespaces": [ 00:09:42.833 { 00:09:42.833 "nsid": 1, 00:09:42.833 "bdev_name": "Malloc2", 00:09:42.833 "name": "Malloc2", 00:09:42.833 "nguid": "F40377090EDB4F30AC997A46A301C6D3", 00:09:42.833 "uuid": "f4037709-0edb-4f30-ac99-7a46a301c6d3" 00:09:42.833 } 00:09:42.833 ] 00:09:42.833 } 00:09:42.833 ] 00:09:42.833 10:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:09:42.833 10:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1146652 00:09:42.833 10:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:09:42.833 10:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:09:42.833 10:26:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:09:42.833 10:26:31 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:42.833 10:26:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:42.833 10:26:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:09:42.834 10:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:09:42.834 10:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:09:43.091 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.091 [2024-07-15 10:26:31.527254] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:43.347 Malloc3 00:09:43.347 10:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:09:43.347 [2024-07-15 10:26:31.880724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:43.347 10:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:43.604 Asynchronous Event Request test 00:09:43.604 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:43.604 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:43.604 Registering asynchronous event callbacks... 00:09:43.604 Starting namespace attribute notice tests for all controllers... 00:09:43.604 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:09:43.604 aer_cb - Changed Namespace 00:09:43.604 Cleaning up... 
00:09:43.604 [ 00:09:43.604 { 00:09:43.604 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:43.604 "subtype": "Discovery", 00:09:43.604 "listen_addresses": [], 00:09:43.604 "allow_any_host": true, 00:09:43.604 "hosts": [] 00:09:43.604 }, 00:09:43.604 { 00:09:43.604 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:43.604 "subtype": "NVMe", 00:09:43.604 "listen_addresses": [ 00:09:43.604 { 00:09:43.604 "trtype": "VFIOUSER", 00:09:43.604 "adrfam": "IPv4", 00:09:43.604 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:43.604 "trsvcid": "0" 00:09:43.604 } 00:09:43.604 ], 00:09:43.604 "allow_any_host": true, 00:09:43.604 "hosts": [], 00:09:43.604 "serial_number": "SPDK1", 00:09:43.604 "model_number": "SPDK bdev Controller", 00:09:43.604 "max_namespaces": 32, 00:09:43.604 "min_cntlid": 1, 00:09:43.604 "max_cntlid": 65519, 00:09:43.604 "namespaces": [ 00:09:43.604 { 00:09:43.604 "nsid": 1, 00:09:43.604 "bdev_name": "Malloc1", 00:09:43.604 "name": "Malloc1", 00:09:43.604 "nguid": "1F8A2149E9BE4D239B689EE89816D81F", 00:09:43.604 "uuid": "1f8a2149-e9be-4d23-9b68-9ee89816d81f" 00:09:43.604 }, 00:09:43.604 { 00:09:43.604 "nsid": 2, 00:09:43.604 "bdev_name": "Malloc3", 00:09:43.604 "name": "Malloc3", 00:09:43.604 "nguid": "BD3B8CA6A1294311B7766AEDFABD7140", 00:09:43.604 "uuid": "bd3b8ca6-a129-4311-b776-6aedfabd7140" 00:09:43.604 } 00:09:43.604 ] 00:09:43.604 }, 00:09:43.604 { 00:09:43.604 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:43.604 "subtype": "NVMe", 00:09:43.604 "listen_addresses": [ 00:09:43.604 { 00:09:43.604 "trtype": "VFIOUSER", 00:09:43.604 "adrfam": "IPv4", 00:09:43.604 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:43.604 "trsvcid": "0" 00:09:43.604 } 00:09:43.604 ], 00:09:43.604 "allow_any_host": true, 00:09:43.604 "hosts": [], 00:09:43.604 "serial_number": "SPDK2", 00:09:43.604 "model_number": "SPDK bdev Controller", 00:09:43.604 "max_namespaces": 32, 00:09:43.605 "min_cntlid": 1, 00:09:43.605 "max_cntlid": 65519, 00:09:43.605 "namespaces": [ 00:09:43.605 { 00:09:43.605 "nsid": 1, 00:09:43.605 "bdev_name": "Malloc2", 00:09:43.605 "name": "Malloc2", 00:09:43.605 "nguid": "F40377090EDB4F30AC997A46A301C6D3", 00:09:43.605 "uuid": "f4037709-0edb-4f30-ac99-7a46a301c6d3" 00:09:43.605 } 00:09:43.605 ] 00:09:43.605 } 00:09:43.605 ] 00:09:43.605 10:26:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1146652 00:09:43.605 10:26:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:43.605 10:26:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:09:43.605 10:26:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:09:43.605 10:26:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:43.864 [2024-07-15 10:26:32.157757] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:43.864 [2024-07-15 10:26:32.157799] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146718 ] 00:09:43.864 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.864 [2024-07-15 10:26:32.191596] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:09:43.864 [2024-07-15 10:26:32.200088] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:43.864 [2024-07-15 10:26:32.200134] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdb733eb000 00:09:43.864 [2024-07-15 10:26:32.201086] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:43.864 [2024-07-15 10:26:32.202086] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:43.864 [2024-07-15 10:26:32.203091] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:43.864 [2024-07-15 10:26:32.204115] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:43.864 [2024-07-15 10:26:32.205122] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:43.864 [2024-07-15 10:26:32.206125] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:43.864 [2024-07-15 10:26:32.207147] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:43.864 [2024-07-15 10:26:32.208144] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:43.864 [2024-07-15 10:26:32.209151] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:43.864 [2024-07-15 10:26:32.209172] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdb733e0000 00:09:43.864 [2024-07-15 10:26:32.210285] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:43.864 [2024-07-15 10:26:32.225492] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:09:43.864 [2024-07-15 10:26:32.225530] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:09:43.864 [2024-07-15 10:26:32.230620] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:43.864 [2024-07-15 10:26:32.230672] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:43.864 [2024-07-15 10:26:32.230758] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:09:43.864 [2024-07-15 10:26:32.230806] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:09:43.864 [2024-07-15 10:26:32.230819] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:09:43.864 [2024-07-15 10:26:32.231621] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:09:43.864 [2024-07-15 10:26:32.231642] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:09:43.864 [2024-07-15 10:26:32.231655] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:09:43.864 [2024-07-15 10:26:32.232632] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:43.864 [2024-07-15 10:26:32.232652] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:09:43.864 [2024-07-15 10:26:32.232666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:09:43.864 [2024-07-15 10:26:32.233633] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:09:43.864 [2024-07-15 10:26:32.233654] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:43.864 [2024-07-15 10:26:32.234642] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:09:43.864 [2024-07-15 10:26:32.234664] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:09:43.864 [2024-07-15 10:26:32.234674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:09:43.864 [2024-07-15 10:26:32.234685] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:43.864 [2024-07-15 10:26:32.234810] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:09:43.864 [2024-07-15 10:26:32.234821] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:43.864 [2024-07-15 10:26:32.234835] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:09:43.864 [2024-07-15 10:26:32.235648] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:09:43.864 [2024-07-15 10:26:32.236656] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:09:43.864 [2024-07-15 10:26:32.237661] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:43.864 [2024-07-15 10:26:32.238659] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:43.864 [2024-07-15 10:26:32.238723] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:43.864 [2024-07-15 10:26:32.239675] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:09:43.864 [2024-07-15 10:26:32.239711] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:43.864 [2024-07-15 10:26:32.239721] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:09:43.864 [2024-07-15 10:26:32.239745] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:09:43.864 [2024-07-15 10:26:32.239762] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:09:43.864 [2024-07-15 10:26:32.239809] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:43.864 [2024-07-15 10:26:32.239821] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:43.864 [2024-07-15 10:26:32.239841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:43.864 [2024-07-15 10:26:32.247831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:43.864 [2024-07-15 10:26:32.247855] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:09:43.864 [2024-07-15 10:26:32.247869] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:09:43.864 [2024-07-15 10:26:32.247877] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:09:43.864 [2024-07-15 10:26:32.247885] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:43.864 [2024-07-15 10:26:32.247893] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:09:43.864 [2024-07-15 10:26:32.247901] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:09:43.864 [2024-07-15 10:26:32.247910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:09:43.864 [2024-07-15 10:26:32.247923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:09:43.864 [2024-07-15 10:26:32.247939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:09:43.864 [2024-07-15 10:26:32.255816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:43.864 [2024-07-15 10:26:32.255845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:43.864 [2024-07-15 10:26:32.255864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:43.865 [2024-07-15 10:26:32.255877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:43.865 [2024-07-15 10:26:32.255890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:43.865 [2024-07-15 10:26:32.255899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.255914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.255930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:43.865 [2024-07-15 10:26:32.263818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:43.865 [2024-07-15 10:26:32.263836] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:09:43.865 [2024-07-15 10:26:32.263846] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.263858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.263868] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.263882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:43.865 [2024-07-15 10:26:32.271812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:43.865 [2024-07-15 10:26:32.271883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.271909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.271923] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:43.865 [2024-07-15 10:26:32.271932] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:43.865 [2024-07-15 10:26:32.271942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:09:43.865 [2024-07-15 10:26:32.279829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:43.865 [2024-07-15 10:26:32.279858] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:09:43.865 [2024-07-15 10:26:32.279875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.279890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.279903] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:43.865 [2024-07-15 10:26:32.279911] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:43.865 [2024-07-15 10:26:32.279921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:43.865 [2024-07-15 10:26:32.287829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:43.865 [2024-07-15 10:26:32.287858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.287875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.287889] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:43.865 [2024-07-15 10:26:32.287898] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:43.865 [2024-07-15 10:26:32.287907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:43.865 [2024-07-15 10:26:32.295826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:43.865 [2024-07-15 10:26:32.295853] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.295866] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.295881] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.295892] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.295901] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.295909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:09:43.865 
[2024-07-15 10:26:32.295918] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:09:43.865 [2024-07-15 10:26:32.295926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:09:43.865 [2024-07-15 10:26:32.295934] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:09:43.865 [2024-07-15 10:26:32.295960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:43.865 [2024-07-15 10:26:32.303813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:43.865 [2024-07-15 10:26:32.303839] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:43.865 [2024-07-15 10:26:32.311817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:43.865 [2024-07-15 10:26:32.311844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:43.865 [2024-07-15 10:26:32.319813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:43.865 [2024-07-15 10:26:32.319840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:43.865 [2024-07-15 10:26:32.327812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:43.865 [2024-07-15 10:26:32.327848] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:43.865 [2024-07-15 10:26:32.327866] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:43.865 [2024-07-15 10:26:32.327873] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:43.865 [2024-07-15 10:26:32.327879] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:43.865 [2024-07-15 10:26:32.327889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:43.865 [2024-07-15 10:26:32.327902] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:43.865 [2024-07-15 10:26:32.327910] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:43.865 [2024-07-15 10:26:32.327919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:43.865 [2024-07-15 10:26:32.327936] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:43.865 [2024-07-15 10:26:32.327944] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:43.865 [2024-07-15 10:26:32.327953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:09:43.865 [2024-07-15 10:26:32.327965] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:43.865 [2024-07-15 10:26:32.327973] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:43.865 [2024-07-15 10:26:32.327982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:43.865 [2024-07-15 10:26:32.335833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:43.865 [2024-07-15 10:26:32.335860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:43.865 [2024-07-15 10:26:32.335893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:43.865 [2024-07-15 10:26:32.335906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:43.865 ===================================================== 00:09:43.865 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:43.865 ===================================================== 00:09:43.865 Controller Capabilities/Features 00:09:43.865 ================================ 00:09:43.865 Vendor ID: 4e58 00:09:43.865 Subsystem Vendor ID: 4e58 00:09:43.865 Serial Number: SPDK2 00:09:43.865 Model Number: SPDK bdev Controller 00:09:43.865 Firmware Version: 24.09 00:09:43.865 Recommended Arb Burst: 6 00:09:43.865 IEEE OUI Identifier: 8d 6b 50 00:09:43.865 Multi-path I/O 00:09:43.865 May have multiple subsystem ports: Yes 00:09:43.865 May have multiple controllers: Yes 00:09:43.865 Associated with SR-IOV VF: No 00:09:43.865 Max Data Transfer Size: 131072 00:09:43.865 Max Number of Namespaces: 32 00:09:43.865 Max Number of I/O Queues: 127 00:09:43.865 NVMe Specification Version (VS): 1.3 00:09:43.865 NVMe Specification Version (Identify): 1.3 00:09:43.865 Maximum Queue Entries: 256 00:09:43.865 Contiguous Queues Required: Yes 00:09:43.865 Arbitration Mechanisms Supported 00:09:43.865 Weighted Round Robin: Not Supported 00:09:43.865 Vendor Specific: Not Supported 00:09:43.865 Reset Timeout: 15000 ms 00:09:43.865 Doorbell Stride: 4 bytes 00:09:43.865 NVM Subsystem Reset: Not Supported 00:09:43.865 Command Sets Supported 00:09:43.865 NVM Command Set: Supported 00:09:43.865 Boot Partition: Not Supported 00:09:43.866 Memory Page Size Minimum: 4096 bytes 00:09:43.866 Memory Page Size Maximum: 4096 bytes 00:09:43.866 Persistent Memory Region: Not Supported 00:09:43.866 Optional Asynchronous Events Supported 00:09:43.866 Namespace Attribute Notices: Supported 00:09:43.866 Firmware Activation Notices: Not Supported 00:09:43.866 ANA Change Notices: Not Supported 00:09:43.866 PLE Aggregate Log Change Notices: Not Supported 00:09:43.866 LBA Status Info Alert Notices: Not Supported 00:09:43.866 EGE Aggregate Log Change Notices: Not Supported 00:09:43.866 Normal NVM Subsystem Shutdown event: Not Supported 00:09:43.866 Zone Descriptor Change Notices: Not Supported 00:09:43.866 Discovery Log Change Notices: Not Supported 00:09:43.866 Controller Attributes 00:09:43.866 128-bit Host Identifier: Supported 00:09:43.866 Non-Operational Permissive Mode: Not Supported 00:09:43.866 NVM Sets: Not Supported 00:09:43.866 Read Recovery Levels: Not Supported 
00:09:43.866 Endurance Groups: Not Supported 00:09:43.866 Predictable Latency Mode: Not Supported 00:09:43.866 Traffic Based Keep ALive: Not Supported 00:09:43.866 Namespace Granularity: Not Supported 00:09:43.866 SQ Associations: Not Supported 00:09:43.866 UUID List: Not Supported 00:09:43.866 Multi-Domain Subsystem: Not Supported 00:09:43.866 Fixed Capacity Management: Not Supported 00:09:43.866 Variable Capacity Management: Not Supported 00:09:43.866 Delete Endurance Group: Not Supported 00:09:43.866 Delete NVM Set: Not Supported 00:09:43.866 Extended LBA Formats Supported: Not Supported 00:09:43.866 Flexible Data Placement Supported: Not Supported 00:09:43.866 00:09:43.866 Controller Memory Buffer Support 00:09:43.866 ================================ 00:09:43.866 Supported: No 00:09:43.866 00:09:43.866 Persistent Memory Region Support 00:09:43.866 ================================ 00:09:43.866 Supported: No 00:09:43.866 00:09:43.866 Admin Command Set Attributes 00:09:43.866 ============================ 00:09:43.866 Security Send/Receive: Not Supported 00:09:43.866 Format NVM: Not Supported 00:09:43.866 Firmware Activate/Download: Not Supported 00:09:43.866 Namespace Management: Not Supported 00:09:43.866 Device Self-Test: Not Supported 00:09:43.866 Directives: Not Supported 00:09:43.866 NVMe-MI: Not Supported 00:09:43.866 Virtualization Management: Not Supported 00:09:43.866 Doorbell Buffer Config: Not Supported 00:09:43.866 Get LBA Status Capability: Not Supported 00:09:43.866 Command & Feature Lockdown Capability: Not Supported 00:09:43.866 Abort Command Limit: 4 00:09:43.866 Async Event Request Limit: 4 00:09:43.866 Number of Firmware Slots: N/A 00:09:43.866 Firmware Slot 1 Read-Only: N/A 00:09:43.866 Firmware Activation Without Reset: N/A 00:09:43.866 Multiple Update Detection Support: N/A 00:09:43.866 Firmware Update Granularity: No Information Provided 00:09:43.866 Per-Namespace SMART Log: No 00:09:43.866 Asymmetric Namespace Access Log Page: Not Supported 00:09:43.866 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:09:43.866 Command Effects Log Page: Supported 00:09:43.866 Get Log Page Extended Data: Supported 00:09:43.866 Telemetry Log Pages: Not Supported 00:09:43.866 Persistent Event Log Pages: Not Supported 00:09:43.866 Supported Log Pages Log Page: May Support 00:09:43.866 Commands Supported & Effects Log Page: Not Supported 00:09:43.866 Feature Identifiers & Effects Log Page:May Support 00:09:43.866 NVMe-MI Commands & Effects Log Page: May Support 00:09:43.866 Data Area 4 for Telemetry Log: Not Supported 00:09:43.866 Error Log Page Entries Supported: 128 00:09:43.866 Keep Alive: Supported 00:09:43.866 Keep Alive Granularity: 10000 ms 00:09:43.866 00:09:43.866 NVM Command Set Attributes 00:09:43.866 ========================== 00:09:43.866 Submission Queue Entry Size 00:09:43.866 Max: 64 00:09:43.866 Min: 64 00:09:43.866 Completion Queue Entry Size 00:09:43.866 Max: 16 00:09:43.866 Min: 16 00:09:43.866 Number of Namespaces: 32 00:09:43.866 Compare Command: Supported 00:09:43.866 Write Uncorrectable Command: Not Supported 00:09:43.866 Dataset Management Command: Supported 00:09:43.866 Write Zeroes Command: Supported 00:09:43.866 Set Features Save Field: Not Supported 00:09:43.866 Reservations: Not Supported 00:09:43.866 Timestamp: Not Supported 00:09:43.866 Copy: Supported 00:09:43.866 Volatile Write Cache: Present 00:09:43.866 Atomic Write Unit (Normal): 1 00:09:43.866 Atomic Write Unit (PFail): 1 00:09:43.866 Atomic Compare & Write Unit: 1 00:09:43.866 Fused Compare & Write: 
Supported 00:09:43.866 Scatter-Gather List 00:09:43.866 SGL Command Set: Supported (Dword aligned) 00:09:43.866 SGL Keyed: Not Supported 00:09:43.866 SGL Bit Bucket Descriptor: Not Supported 00:09:43.866 SGL Metadata Pointer: Not Supported 00:09:43.866 Oversized SGL: Not Supported 00:09:43.866 SGL Metadata Address: Not Supported 00:09:43.866 SGL Offset: Not Supported 00:09:43.866 Transport SGL Data Block: Not Supported 00:09:43.866 Replay Protected Memory Block: Not Supported 00:09:43.866 00:09:43.866 Firmware Slot Information 00:09:43.866 ========================= 00:09:43.866 Active slot: 1 00:09:43.866 Slot 1 Firmware Revision: 24.09 00:09:43.866 00:09:43.866 00:09:43.866 Commands Supported and Effects 00:09:43.866 ============================== 00:09:43.866 Admin Commands 00:09:43.866 -------------- 00:09:43.866 Get Log Page (02h): Supported 00:09:43.866 Identify (06h): Supported 00:09:43.866 Abort (08h): Supported 00:09:43.866 Set Features (09h): Supported 00:09:43.866 Get Features (0Ah): Supported 00:09:43.866 Asynchronous Event Request (0Ch): Supported 00:09:43.866 Keep Alive (18h): Supported 00:09:43.866 I/O Commands 00:09:43.866 ------------ 00:09:43.866 Flush (00h): Supported LBA-Change 00:09:43.866 Write (01h): Supported LBA-Change 00:09:43.866 Read (02h): Supported 00:09:43.866 Compare (05h): Supported 00:09:43.866 Write Zeroes (08h): Supported LBA-Change 00:09:43.866 Dataset Management (09h): Supported LBA-Change 00:09:43.866 Copy (19h): Supported LBA-Change 00:09:43.866 00:09:43.866 Error Log 00:09:43.866 ========= 00:09:43.866 00:09:43.866 Arbitration 00:09:43.866 =========== 00:09:43.866 Arbitration Burst: 1 00:09:43.866 00:09:43.866 Power Management 00:09:43.866 ================ 00:09:43.866 Number of Power States: 1 00:09:43.866 Current Power State: Power State #0 00:09:43.866 Power State #0: 00:09:43.866 Max Power: 0.00 W 00:09:43.866 Non-Operational State: Operational 00:09:43.866 Entry Latency: Not Reported 00:09:43.866 Exit Latency: Not Reported 00:09:43.866 Relative Read Throughput: 0 00:09:43.866 Relative Read Latency: 0 00:09:43.866 Relative Write Throughput: 0 00:09:43.866 Relative Write Latency: 0 00:09:43.866 Idle Power: Not Reported 00:09:43.866 Active Power: Not Reported 00:09:43.866 Non-Operational Permissive Mode: Not Supported 00:09:43.866 00:09:43.866 Health Information 00:09:43.866 ================== 00:09:43.866 Critical Warnings: 00:09:43.866 Available Spare Space: OK 00:09:43.866 Temperature: OK 00:09:43.866 Device Reliability: OK 00:09:43.866 Read Only: No 00:09:43.866 Volatile Memory Backup: OK 00:09:43.866 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:43.866 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:43.866 Available Spare: 0% 00:09:43.866 Available Sp[2024-07-15 10:26:32.336032] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:43.866 [2024-07-15 10:26:32.342829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:43.866 [2024-07-15 10:26:32.342886] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:09:43.866 [2024-07-15 10:26:32.342905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.866 [2024-07-15 10:26:32.342917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.866 [2024-07-15 10:26:32.342927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.866 [2024-07-15 10:26:32.342938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.866 [2024-07-15 10:26:32.343004] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:43.866 [2024-07-15 10:26:32.343025] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:09:43.867 [2024-07-15 10:26:32.344004] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:43.867 [2024-07-15 10:26:32.344078] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:09:43.867 [2024-07-15 10:26:32.344115] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:09:43.867 [2024-07-15 10:26:32.345017] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:09:43.867 [2024-07-15 10:26:32.345043] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:09:43.867 [2024-07-15 10:26:32.345098] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:09:43.867 [2024-07-15 10:26:32.346290] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:43.867 are Threshold: 0% 00:09:43.867 Life Percentage Used: 0% 00:09:43.867 Data Units Read: 0 00:09:43.867 Data Units Written: 0 00:09:43.867 Host Read Commands: 0 00:09:43.867 Host Write Commands: 0 00:09:43.867 Controller Busy Time: 0 minutes 00:09:43.867 Power Cycles: 0 00:09:43.867 Power On Hours: 0 hours 00:09:43.867 Unsafe Shutdowns: 0 00:09:43.867 Unrecoverable Media Errors: 0 00:09:43.867 Lifetime Error Log Entries: 0 00:09:43.867 Warning Temperature Time: 0 minutes 00:09:43.867 Critical Temperature Time: 0 minutes 00:09:43.867 00:09:43.867 Number of Queues 00:09:43.867 ================ 00:09:43.867 Number of I/O Submission Queues: 127 00:09:43.867 Number of I/O Completion Queues: 127 00:09:43.867 00:09:43.867 Active Namespaces 00:09:43.867 ================= 00:09:43.867 Namespace ID:1 00:09:43.867 Error Recovery Timeout: Unlimited 00:09:43.867 Command Set Identifier: NVM (00h) 00:09:43.867 Deallocate: Supported 00:09:43.867 Deallocated/Unwritten Error: Not Supported 00:09:43.867 Deallocated Read Value: Unknown 00:09:43.867 Deallocate in Write Zeroes: Not Supported 00:09:43.867 Deallocated Guard Field: 0xFFFF 00:09:43.867 Flush: Supported 00:09:43.867 Reservation: Supported 00:09:43.867 Namespace Sharing Capabilities: Multiple Controllers 00:09:43.867 Size (in LBAs): 131072 (0GiB) 00:09:43.867 Capacity (in LBAs): 131072 (0GiB) 00:09:43.867 Utilization (in LBAs): 131072 (0GiB) 00:09:43.867 NGUID: F40377090EDB4F30AC997A46A301C6D3 00:09:43.867 UUID: f4037709-0edb-4f30-ac99-7a46a301c6d3 00:09:43.867 Thin Provisioning: Not Supported 00:09:43.867 Per-NS Atomic Units: Yes 00:09:43.867 Atomic Boundary Size (Normal): 0 00:09:43.867 Atomic Boundary Size 
(PFail): 0 00:09:43.867 Atomic Boundary Offset: 0 00:09:43.867 Maximum Single Source Range Length: 65535 00:09:43.867 Maximum Copy Length: 65535 00:09:43.867 Maximum Source Range Count: 1 00:09:43.867 NGUID/EUI64 Never Reused: No 00:09:43.867 Namespace Write Protected: No 00:09:43.867 Number of LBA Formats: 1 00:09:43.867 Current LBA Format: LBA Format #00 00:09:43.867 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:43.867 00:09:43.867 10:26:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:44.124 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.124 [2024-07-15 10:26:32.577652] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:49.381 Initializing NVMe Controllers 00:09:49.381 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:49.381 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:09:49.381 Initialization complete. Launching workers. 00:09:49.381 ======================================================== 00:09:49.381 Latency(us) 00:09:49.381 Device Information : IOPS MiB/s Average min max 00:09:49.381 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34548.77 134.96 3704.50 1167.60 10608.35 00:09:49.381 ======================================================== 00:09:49.381 Total : 34548.77 134.96 3704.50 1167.60 10608.35 00:09:49.381 00:09:49.381 [2024-07-15 10:26:37.682186] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:49.381 10:26:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:49.381 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.381 [2024-07-15 10:26:37.924823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:54.655 Initializing NVMe Controllers 00:09:54.655 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:54.655 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:09:54.655 Initialization complete. Launching workers. 
00:09:54.655 ======================================================== 00:09:54.656 Latency(us) 00:09:54.656 Device Information : IOPS MiB/s Average min max 00:09:54.656 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31876.93 124.52 4014.87 1182.02 8315.68 00:09:54.656 ======================================================== 00:09:54.656 Total : 31876.93 124.52 4014.87 1182.02 8315.68 00:09:54.656 00:09:54.656 [2024-07-15 10:26:42.947588] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:54.656 10:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:54.656 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.656 [2024-07-15 10:26:43.156414] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:59.922 [2024-07-15 10:26:48.280953] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:59.922 Initializing NVMe Controllers 00:09:59.922 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:59.922 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:59.922 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:09:59.922 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:09:59.922 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:09:59.922 Initialization complete. Launching workers. 00:09:59.922 Starting thread on core 2 00:09:59.922 Starting thread on core 3 00:09:59.922 Starting thread on core 1 00:09:59.922 10:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:09:59.922 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.179 [2024-07-15 10:26:48.587615] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:03.463 [2024-07-15 10:26:51.643489] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:03.463 Initializing NVMe Controllers 00:10:03.463 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:03.463 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:03.463 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:03.463 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:03.463 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:03.463 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:03.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:03.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:03.463 Initialization complete. Launching workers. 
00:10:03.463 Starting thread on core 1 with urgent priority queue 00:10:03.463 Starting thread on core 2 with urgent priority queue 00:10:03.463 Starting thread on core 3 with urgent priority queue 00:10:03.463 Starting thread on core 0 with urgent priority queue 00:10:03.463 SPDK bdev Controller (SPDK2 ) core 0: 5022.00 IO/s 19.91 secs/100000 ios 00:10:03.463 SPDK bdev Controller (SPDK2 ) core 1: 5282.00 IO/s 18.93 secs/100000 ios 00:10:03.463 SPDK bdev Controller (SPDK2 ) core 2: 4637.00 IO/s 21.57 secs/100000 ios 00:10:03.463 SPDK bdev Controller (SPDK2 ) core 3: 5264.00 IO/s 19.00 secs/100000 ios 00:10:03.463 ======================================================== 00:10:03.463 00:10:03.463 10:26:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:03.463 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.463 [2024-07-15 10:26:51.942327] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:03.463 Initializing NVMe Controllers 00:10:03.463 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:03.463 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:03.463 Namespace ID: 1 size: 0GB 00:10:03.463 Initialization complete. 00:10:03.463 INFO: using host memory buffer for IO 00:10:03.463 Hello world! 00:10:03.463 [2024-07-15 10:26:51.955525] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:03.463 10:26:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:03.720 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.720 [2024-07-15 10:26:52.239141] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:05.093 Initializing NVMe Controllers 00:10:05.093 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:05.093 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:05.093 Initialization complete. Launching workers. 
00:10:05.093 submit (in ns) avg, min, max = 8008.7, 3485.6, 4017241.1 00:10:05.093 complete (in ns) avg, min, max = 24261.5, 2040.0, 4045945.6 00:10:05.093 00:10:05.093 Submit histogram 00:10:05.093 ================ 00:10:05.093 Range in us Cumulative Count 00:10:05.093 3.484 - 3.508: 1.0938% ( 151) 00:10:05.093 3.508 - 3.532: 2.9337% ( 254) 00:10:05.093 3.532 - 3.556: 7.1930% ( 588) 00:10:05.093 3.556 - 3.579: 13.1909% ( 828) 00:10:05.093 3.579 - 3.603: 21.0503% ( 1085) 00:10:05.093 3.603 - 3.627: 28.3883% ( 1013) 00:10:05.093 3.627 - 3.650: 36.8417% ( 1167) 00:10:05.093 3.650 - 3.674: 43.2307% ( 882) 00:10:05.093 3.674 - 3.698: 50.2499% ( 969) 00:10:05.093 3.698 - 3.721: 55.6972% ( 752) 00:10:05.093 3.721 - 3.745: 60.4998% ( 663) 00:10:05.093 3.745 - 3.769: 63.9913% ( 482) 00:10:05.093 3.769 - 3.793: 67.3814% ( 468) 00:10:05.093 3.793 - 3.816: 71.0902% ( 512) 00:10:05.093 3.816 - 3.840: 74.8280% ( 516) 00:10:05.093 3.840 - 3.864: 78.4643% ( 502) 00:10:05.093 3.864 - 3.887: 81.6154% ( 435) 00:10:05.093 3.887 - 3.911: 84.6722% ( 422) 00:10:05.093 3.911 - 3.935: 87.0337% ( 326) 00:10:05.093 3.935 - 3.959: 88.8084% ( 245) 00:10:05.093 3.959 - 3.982: 90.1413% ( 184) 00:10:05.093 3.982 - 4.006: 91.3872% ( 172) 00:10:05.093 4.006 - 4.030: 92.1912% ( 111) 00:10:05.093 4.030 - 4.053: 92.8794% ( 95) 00:10:05.093 4.053 - 4.077: 93.6038% ( 100) 00:10:05.093 4.077 - 4.101: 94.1760% ( 79) 00:10:05.093 4.101 - 4.124: 94.7773% ( 83) 00:10:05.093 4.124 - 4.148: 95.1829% ( 56) 00:10:05.093 4.148 - 4.172: 95.5306% ( 48) 00:10:05.093 4.172 - 4.196: 95.8059% ( 38) 00:10:05.093 4.196 - 4.219: 95.9797% ( 24) 00:10:05.093 4.219 - 4.243: 96.1029% ( 17) 00:10:05.093 4.243 - 4.267: 96.2550% ( 21) 00:10:05.093 4.267 - 4.290: 96.3274% ( 10) 00:10:05.093 4.290 - 4.314: 96.3781% ( 7) 00:10:05.093 4.314 - 4.338: 96.4650% ( 12) 00:10:05.093 4.338 - 4.361: 96.5737% ( 15) 00:10:05.093 4.361 - 4.385: 96.6244% ( 7) 00:10:05.093 4.385 - 4.409: 96.6751% ( 7) 00:10:05.093 4.409 - 4.433: 96.7258% ( 7) 00:10:05.093 4.433 - 4.456: 96.7620% ( 5) 00:10:05.093 4.456 - 4.480: 96.8127% ( 7) 00:10:05.093 4.480 - 4.504: 96.8345% ( 3) 00:10:05.093 4.504 - 4.527: 96.8635% ( 4) 00:10:05.093 4.527 - 4.551: 96.8779% ( 2) 00:10:05.093 4.551 - 4.575: 96.8924% ( 2) 00:10:05.093 4.599 - 4.622: 96.9069% ( 2) 00:10:05.093 4.622 - 4.646: 96.9214% ( 2) 00:10:05.093 4.646 - 4.670: 96.9431% ( 3) 00:10:05.093 4.670 - 4.693: 96.9649% ( 3) 00:10:05.093 4.693 - 4.717: 96.9866% ( 3) 00:10:05.093 4.717 - 4.741: 96.9938% ( 1) 00:10:05.093 4.741 - 4.764: 97.0011% ( 1) 00:10:05.093 4.764 - 4.788: 97.0663% ( 9) 00:10:05.093 4.788 - 4.812: 97.1170% ( 7) 00:10:05.093 4.812 - 4.836: 97.1604% ( 6) 00:10:05.093 4.836 - 4.859: 97.2112% ( 7) 00:10:05.093 4.859 - 4.883: 97.2836% ( 10) 00:10:05.093 4.883 - 4.907: 97.3343% ( 7) 00:10:05.093 4.907 - 4.930: 97.3633% ( 4) 00:10:05.093 4.930 - 4.954: 97.3995% ( 5) 00:10:05.093 4.954 - 4.978: 97.4212% ( 3) 00:10:05.093 4.978 - 5.001: 97.4719% ( 7) 00:10:05.093 5.001 - 5.025: 97.5154% ( 6) 00:10:05.093 5.025 - 5.049: 97.5371% ( 3) 00:10:05.093 5.049 - 5.073: 97.5661% ( 4) 00:10:05.093 5.073 - 5.096: 97.6096% ( 6) 00:10:05.093 5.096 - 5.120: 97.6458% ( 5) 00:10:05.093 5.120 - 5.144: 97.6820% ( 5) 00:10:05.093 5.144 - 5.167: 97.7182% ( 5) 00:10:05.093 5.167 - 5.191: 97.7544% ( 5) 00:10:05.093 5.191 - 5.215: 97.7762% ( 3) 00:10:05.093 5.215 - 5.239: 97.7907% ( 2) 00:10:05.093 5.239 - 5.262: 97.7979% ( 1) 00:10:05.093 5.262 - 5.286: 97.8486% ( 7) 00:10:05.093 5.286 - 5.310: 97.8631% ( 2) 00:10:05.093 5.310 - 5.333: 97.8848% ( 3) 
00:10:05.093 5.333 - 5.357: 97.8921% ( 1) 00:10:05.093 5.357 - 5.381: 97.8993% ( 1) 00:10:05.093 5.381 - 5.404: 97.9066% ( 1) 00:10:05.093 5.404 - 5.428: 97.9283% ( 3) 00:10:05.093 5.428 - 5.452: 97.9500% ( 3) 00:10:05.093 5.452 - 5.476: 97.9645% ( 2) 00:10:05.093 5.523 - 5.547: 97.9717% ( 1) 00:10:05.093 5.570 - 5.594: 97.9862% ( 2) 00:10:05.093 5.618 - 5.641: 98.0152% ( 4) 00:10:05.093 5.641 - 5.665: 98.0225% ( 1) 00:10:05.093 5.665 - 5.689: 98.0297% ( 1) 00:10:05.093 5.760 - 5.784: 98.0369% ( 1) 00:10:05.093 5.784 - 5.807: 98.0514% ( 2) 00:10:05.093 5.855 - 5.879: 98.0587% ( 1) 00:10:05.093 5.950 - 5.973: 98.0659% ( 1) 00:10:05.093 5.973 - 5.997: 98.0732% ( 1) 00:10:05.093 6.021 - 6.044: 98.0804% ( 1) 00:10:05.093 6.068 - 6.116: 98.0876% ( 1) 00:10:05.093 6.305 - 6.353: 98.1021% ( 2) 00:10:05.093 6.400 - 6.447: 98.1094% ( 1) 00:10:05.093 6.590 - 6.637: 98.1239% ( 2) 00:10:05.093 6.637 - 6.684: 98.1311% ( 1) 00:10:05.093 6.732 - 6.779: 98.1384% ( 1) 00:10:05.093 7.016 - 7.064: 98.1456% ( 1) 00:10:05.093 7.159 - 7.206: 98.1601% ( 2) 00:10:05.093 7.301 - 7.348: 98.1673% ( 1) 00:10:05.093 7.348 - 7.396: 98.1891% ( 3) 00:10:05.093 7.396 - 7.443: 98.1963% ( 1) 00:10:05.093 7.443 - 7.490: 98.2035% ( 1) 00:10:05.093 7.490 - 7.538: 98.2180% ( 2) 00:10:05.093 7.538 - 7.585: 98.2253% ( 1) 00:10:05.093 7.585 - 7.633: 98.2325% ( 1) 00:10:05.093 7.633 - 7.680: 98.2398% ( 1) 00:10:05.093 7.822 - 7.870: 98.2470% ( 1) 00:10:05.093 7.870 - 7.917: 98.2615% ( 2) 00:10:05.093 7.917 - 7.964: 98.2760% ( 2) 00:10:05.093 8.012 - 8.059: 98.2832% ( 1) 00:10:05.093 8.154 - 8.201: 98.2905% ( 1) 00:10:05.093 8.249 - 8.296: 98.3194% ( 4) 00:10:05.093 8.296 - 8.344: 98.3339% ( 2) 00:10:05.094 8.344 - 8.391: 98.3484% ( 2) 00:10:05.094 8.439 - 8.486: 98.3557% ( 1) 00:10:05.094 8.486 - 8.533: 98.3629% ( 1) 00:10:05.094 8.533 - 8.581: 98.3774% ( 2) 00:10:05.094 8.581 - 8.628: 98.3919% ( 2) 00:10:05.094 8.628 - 8.676: 98.3991% ( 1) 00:10:05.094 8.676 - 8.723: 98.4281% ( 4) 00:10:05.094 8.770 - 8.818: 98.4353% ( 1) 00:10:05.094 8.913 - 8.960: 98.4426% ( 1) 00:10:05.094 8.960 - 9.007: 98.4498% ( 1) 00:10:05.094 9.007 - 9.055: 98.4643% ( 2) 00:10:05.094 9.102 - 9.150: 98.4788% ( 2) 00:10:05.094 9.339 - 9.387: 98.4861% ( 1) 00:10:05.094 9.387 - 9.434: 98.4933% ( 1) 00:10:05.094 9.434 - 9.481: 98.5005% ( 1) 00:10:05.094 9.576 - 9.624: 98.5078% ( 1) 00:10:05.094 9.624 - 9.671: 98.5150% ( 1) 00:10:05.094 9.813 - 9.861: 98.5368% ( 3) 00:10:05.094 10.003 - 10.050: 98.5440% ( 1) 00:10:05.094 10.050 - 10.098: 98.5512% ( 1) 00:10:05.094 10.098 - 10.145: 98.5585% ( 1) 00:10:05.094 10.145 - 10.193: 98.5657% ( 1) 00:10:05.094 10.193 - 10.240: 98.5730% ( 1) 00:10:05.094 10.287 - 10.335: 98.5802% ( 1) 00:10:05.094 10.335 - 10.382: 98.5875% ( 1) 00:10:05.094 10.382 - 10.430: 98.6020% ( 2) 00:10:05.094 10.430 - 10.477: 98.6092% ( 1) 00:10:05.094 10.761 - 10.809: 98.6164% ( 1) 00:10:05.094 10.904 - 10.951: 98.6237% ( 1) 00:10:05.094 10.951 - 10.999: 98.6382% ( 2) 00:10:05.094 10.999 - 11.046: 98.6454% ( 1) 00:10:05.094 11.046 - 11.093: 98.6527% ( 1) 00:10:05.094 11.093 - 11.141: 98.6599% ( 1) 00:10:05.094 11.188 - 11.236: 98.6671% ( 1) 00:10:05.094 11.236 - 11.283: 98.6744% ( 1) 00:10:05.094 11.283 - 11.330: 98.6816% ( 1) 00:10:05.094 11.378 - 11.425: 98.6961% ( 2) 00:10:05.094 11.473 - 11.520: 98.7034% ( 1) 00:10:05.094 11.520 - 11.567: 98.7106% ( 1) 00:10:05.094 11.852 - 11.899: 98.7179% ( 1) 00:10:05.094 12.041 - 12.089: 98.7323% ( 2) 00:10:05.094 12.136 - 12.231: 98.7396% ( 1) 00:10:05.094 12.231 - 12.326: 98.7468% ( 1) 00:10:05.094 
12.326 - 12.421: 98.7613% ( 2) 00:10:05.094 12.800 - 12.895: 98.7758% ( 2) 00:10:05.094 13.084 - 13.179: 98.7903% ( 2) 00:10:05.094 13.179 - 13.274: 98.7975% ( 1) 00:10:05.094 13.369 - 13.464: 98.8048% ( 1) 00:10:05.094 13.559 - 13.653: 98.8193% ( 2) 00:10:05.094 13.843 - 13.938: 98.8265% ( 1) 00:10:05.094 13.938 - 14.033: 98.8410% ( 2) 00:10:05.094 14.033 - 14.127: 98.8482% ( 1) 00:10:05.094 14.222 - 14.317: 98.8555% ( 1) 00:10:05.094 14.412 - 14.507: 98.8627% ( 1) 00:10:05.094 14.507 - 14.601: 98.8700% ( 1) 00:10:05.094 14.601 - 14.696: 98.8772% ( 1) 00:10:05.094 14.696 - 14.791: 98.8917% ( 2) 00:10:05.094 14.886 - 14.981: 98.9134% ( 3) 00:10:05.094 14.981 - 15.076: 98.9207% ( 1) 00:10:05.094 15.265 - 15.360: 98.9279% ( 1) 00:10:05.094 15.455 - 15.550: 98.9352% ( 1) 00:10:05.094 16.877 - 16.972: 98.9424% ( 1) 00:10:05.094 16.972 - 17.067: 98.9497% ( 1) 00:10:05.094 17.067 - 17.161: 98.9569% ( 1) 00:10:05.094 17.161 - 17.256: 98.9641% ( 1) 00:10:05.094 17.256 - 17.351: 98.9786% ( 2) 00:10:05.094 17.446 - 17.541: 99.0076% ( 4) 00:10:05.094 17.541 - 17.636: 99.0438% ( 5) 00:10:05.094 17.636 - 17.730: 99.0873% ( 6) 00:10:05.094 17.730 - 17.825: 99.1090% ( 3) 00:10:05.094 17.825 - 17.920: 99.1380% ( 4) 00:10:05.094 17.920 - 18.015: 99.1525% ( 2) 00:10:05.094 18.015 - 18.110: 99.1959% ( 6) 00:10:05.094 18.110 - 18.204: 99.2756% ( 11) 00:10:05.094 18.204 - 18.299: 99.3481% ( 10) 00:10:05.094 18.299 - 18.394: 99.4133% ( 9) 00:10:05.094 18.394 - 18.489: 99.4784% ( 9) 00:10:05.094 18.489 - 18.584: 99.5292% ( 7) 00:10:05.094 18.584 - 18.679: 99.5871% ( 8) 00:10:05.094 18.679 - 18.773: 99.6306% ( 6) 00:10:05.094 18.773 - 18.868: 99.6740% ( 6) 00:10:05.094 18.868 - 18.963: 99.7175% ( 6) 00:10:05.094 18.963 - 19.058: 99.7320% ( 2) 00:10:05.094 19.058 - 19.153: 99.7465% ( 2) 00:10:05.094 19.153 - 19.247: 99.7610% ( 2) 00:10:05.094 19.247 - 19.342: 99.7682% ( 1) 00:10:05.094 19.342 - 19.437: 99.7754% ( 1) 00:10:05.094 19.532 - 19.627: 99.7827% ( 1) 00:10:05.094 19.911 - 20.006: 99.7972% ( 2) 00:10:05.094 20.101 - 20.196: 99.8044% ( 1) 00:10:05.094 21.144 - 21.239: 99.8117% ( 1) 00:10:05.094 21.997 - 22.092: 99.8189% ( 1) 00:10:05.094 23.135 - 23.230: 99.8334% ( 2) 00:10:05.094 23.324 - 23.419: 99.8406% ( 1) 00:10:05.094 24.462 - 24.652: 99.8479% ( 1) 00:10:05.094 26.927 - 27.117: 99.8551% ( 1) 00:10:05.094 28.065 - 28.255: 99.8624% ( 1) 00:10:05.094 29.013 - 29.203: 99.8769% ( 2) 00:10:05.094 32.996 - 33.185: 99.8841% ( 1) 00:10:05.094 35.271 - 35.461: 99.8913% ( 1) 00:10:05.094 38.684 - 38.874: 99.8986% ( 1) 00:10:05.094 3980.705 - 4004.978: 99.9710% ( 10) 00:10:05.094 4004.978 - 4029.250: 100.0000% ( 4) 00:10:05.094 00:10:05.094 Complete histogram 00:10:05.094 ================== 00:10:05.094 Range in us Cumulative Count 00:10:05.094 2.039 - 2.050: 14.4947% ( 2001) 00:10:05.094 2.050 - 2.062: 42.4122% ( 3854) 00:10:05.094 2.062 - 2.074: 44.3390% ( 266) 00:10:05.094 2.074 - 2.086: 54.7917% ( 1443) 00:10:05.094 2.086 - 2.098: 61.6733% ( 950) 00:10:05.094 2.098 - 2.110: 63.4842% ( 250) 00:10:05.094 2.110 - 2.121: 72.8504% ( 1293) 00:10:05.094 2.121 - 2.133: 76.7838% ( 543) 00:10:05.094 2.133 - 2.145: 77.5299% ( 103) 00:10:05.094 2.145 - 2.157: 81.1952% ( 506) 00:10:05.094 2.157 - 2.169: 82.8468% ( 228) 00:10:05.094 2.169 - 2.181: 83.6219% ( 107) 00:10:05.094 2.181 - 2.193: 86.8671% ( 448) 00:10:05.094 2.193 - 2.204: 88.8446% ( 273) 00:10:05.094 2.204 - 2.216: 90.5397% ( 234) 00:10:05.094 2.216 - 2.228: 92.6693% ( 294) 00:10:05.094 2.228 - 2.240: 93.3720% ( 97) 00:10:05.094 2.240 - 2.252: 93.7704% ( 55) 
00:10:05.094 2.252 - 2.264: 94.0529% ( 39) 00:10:05.094 2.264 - 2.276: 94.4151% ( 50) 00:10:05.094 2.276 - 2.287: 95.0598% ( 89) 00:10:05.094 2.287 - 2.299: 95.1757% ( 16) 00:10:05.094 2.299 - 2.311: 95.2336% ( 8) 00:10:05.094 2.311 - 2.323: 95.3205% ( 12) 00:10:05.094 2.323 - 2.335: 95.3930% ( 10) 00:10:05.094 2.335 - 2.347: 95.4364% ( 6) 00:10:05.094 2.347 - 2.359: 95.7407% ( 42) 00:10:05.094 2.359 - 2.370: 96.1173% ( 52) 00:10:05.094 2.370 - 2.382: 96.2767% ( 22) 00:10:05.094 2.382 - 2.394: 96.4868% ( 29) 00:10:05.094 2.394 - 2.406: 96.7113% ( 31) 00:10:05.094 2.406 - 2.418: 96.9359% ( 31) 00:10:05.094 2.418 - 2.430: 97.0735% ( 19) 00:10:05.094 2.430 - 2.441: 97.2329% ( 22) 00:10:05.094 2.441 - 2.453: 97.4067% ( 24) 00:10:05.094 2.453 - 2.465: 97.5516% ( 20) 00:10:05.094 2.465 - 2.477: 97.6675% ( 16) 00:10:05.094 2.477 - 2.489: 97.7544% ( 12) 00:10:05.094 2.489 - 2.501: 97.7979% ( 6) 00:10:05.094 2.501 - 2.513: 97.8414% ( 6) 00:10:05.094 2.513 - 2.524: 97.8848% ( 6) 00:10:05.094 2.524 - 2.536: 97.9210% ( 5) 00:10:05.094 2.536 - 2.548: 97.9428% ( 3) 00:10:05.094 2.548 - 2.560: 97.9790% ( 5) 00:10:05.094 2.560 - 2.572: 97.9935% ( 2) 00:10:05.094 2.572 - 2.584: 98.0225% ( 4) 00:10:05.094 2.584 - 2.596: 98.0369% ( 2) 00:10:05.094 2.596 - 2.607: 98.0587% ( 3) 00:10:05.094 2.607 - 2.619: 98.0659% ( 1) 00:10:05.094 2.643 - 2.655: 98.0876% ( 3) 00:10:05.094 2.655 - 2.667: 98.1021% ( 2) 00:10:05.094 2.667 - 2.679: 98.1094% ( 1) 00:10:05.094 2.679 - 2.690: 98.1239% ( 2) 00:10:05.094 2.690 - 2.702: 98.1311% ( 1) 00:10:05.094 2.702 - 2.714: 98.1528% ( 3) 00:10:05.094 2.714 - 2.726: 98.1601% ( 1) 00:10:05.094 2.726 - 2.738: 98.1673% ( 1) 00:10:05.094 2.738 - 2.750: 98.1746% ( 1) 00:10:05.094 2.761 - 2.773: 98.1818% ( 1) 00:10:05.094 2.773 - 2.785: 98.2035% ( 3) 00:10:05.094 2.785 - 2.797: 98.2108% ( 1) 00:10:05.094 2.797 - 2.809: 98.2180% ( 1) 00:10:05.094 2.809 - 2.821: 98.2253% ( 1) 00:10:05.094 2.833 - 2.844: 98.2325% ( 1) 00:10:05.094 2.844 - 2.856: 98.2398% ( 1) 00:10:05.094 2.868 - 2.880: 98.2470% ( 1) 00:10:05.094 2.892 - 2.904: 98.2543% ( 1) 00:10:05.094 2.904 - 2.916: 98.2687% ( 2) 00:10:05.094 2.916 - 2.927: 98.2760% ( 1) 00:10:05.094 2.939 - 2.951: 98.2832% ( 1) 00:10:05.094 2.951 - 2.963: 98.2977% ( 2) 00:10:05.094 2.963 - 2.975: 98.3050% ( 1) 00:10:05.094 2.987 - 2.999: 98.3122% ( 1) 00:10:05.094 3.010 - 3.022: 98.3194% ( 1) 00:10:05.094 3.034 - 3.058: 98.3267% ( 1) 00:10:05.094 3.058 - 3.081: 98.3557% ( 4) 00:10:05.094 3.081 - 3.105: 98.3629% ( 1) 00:10:05.094 3.105 - 3.129: 98.3846% ( 3) 00:10:05.094 3.129 - 3.153: 98.3991% ( 2) 00:10:05.094 3.153 - 3.176: 98.4064% ( 1) 00:10:05.094 3.200 - 3.224: 98.4136% ( 1) 00:10:05.094 3.224 - 3.247: 98.4209% ( 1) 00:10:05.094 3.247 - 3.271: 98.4353% ( 2) 00:10:05.094 3.271 - 3.295: 98.4498% ( 2) 00:10:05.094 3.366 - 3.390: 98.4571% ( 1) 00:10:05.094 3.413 - 3.437: 98.4643% ( 1) 00:10:05.094 3.437 - 3.461: 98.4788% ( 2) 00:10:05.094 3.461 - 3.484: 98.5005% ( 3) 00:10:05.094 3.508 - 3.532: 98.5078% ( 1) 00:10:05.094 3.532 - 3.556: 98.5150% ( 1) 00:10:05.094 3.556 - 3.579: 98.5223% ( 1) 00:10:05.094 3.603 - 3.627: 98.5368% ( 2) 00:10:05.094 3.698 - 3.721: 98.5512% ( 2) 00:10:05.094 3.745 - 3.769: 98.5585% ( 1) 00:10:05.094 3.769 - 3.793: 98.5802% ( 3) 00:10:05.094 3.793 - 3.816: 98.5875% ( 1) 00:10:05.094 3.887 - 3.911: 98.5947% ( 1) 00:10:05.094 3.935 - 3.959: 98.6020% ( 1) 00:10:05.094 4.077 - 4.101: 98.6164% ( 2) 00:10:05.094 4.101 - 4.124: 98.6309% ( 2) 00:10:05.094 5.879 - 5.902: 98.6382% ( 1) 00:10:05.094 5.902 - 5.926: 98.6454% ( 1) 
00:10:05.094 5.973 - 5.997: 98.6527% ( 1) 00:10:05.094 6.044 - 6.068: 98.6599% ( 1) 00:10:05.094 6.116 - 6.163: 98.6671% ( 1) 00:10:05.094 6.353 - 6.400: 98.6744% ( 1) 00:10:05.094 6.590 - 6.637: 98.6816% ( 1) 00:10:05.094 6.637 - 6.684: 98.6889% ( 1) 00:10:05.094 7.111 - 7.159: 98.7034% ( 2) 00:10:05.094 7.206 - 7.253: 98.7106% ( 1) 00:10:05.094 7.490 - 7.538: 98.7179% ( 1) 00:10:05.094 7.538 - 7.585: 98.7251% ( 1) 00:10:05.094 7.680 - 7.727: 98.7323% ( 1) 00:10:05.094 7.917 - 7.964: 98.7396% ( 1) 00:10:05.094 7.964 - 8.012: 98.7468% ( 1) 00:10:05.094 8.012 - 8.059: 98.7541% ( 1) 00:10:05.094 8.059 - 8.107: 98.7613% ( 1) 00:10:05.094 9.150 - 9.197: 98.7686% ( 1) 00:10:05.094 9.292 - 9.339: 98.7758%[2024-07-15 10:26:53.338681] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:05.094 ( 1) 00:10:05.094 11.520 - 11.567: 98.7830% ( 1) 00:10:05.094 15.360 - 15.455: 98.7903% ( 1) 00:10:05.094 15.550 - 15.644: 98.8048% ( 2) 00:10:05.094 15.644 - 15.739: 98.8265% ( 3) 00:10:05.094 15.739 - 15.834: 98.8627% ( 5) 00:10:05.094 15.834 - 15.929: 98.8917% ( 4) 00:10:05.094 15.929 - 16.024: 98.9279% ( 5) 00:10:05.094 16.024 - 16.119: 98.9641% ( 5) 00:10:05.094 16.119 - 16.213: 98.9859% ( 3) 00:10:05.094 16.213 - 16.308: 99.0293% ( 6) 00:10:05.094 16.308 - 16.403: 99.0438% ( 2) 00:10:05.094 16.403 - 16.498: 99.0583% ( 2) 00:10:05.094 16.498 - 16.593: 99.1018% ( 6) 00:10:05.094 16.593 - 16.687: 99.1452% ( 6) 00:10:05.094 16.687 - 16.782: 99.2177% ( 10) 00:10:05.094 16.782 - 16.877: 99.2466% ( 4) 00:10:05.094 16.877 - 16.972: 99.2684% ( 3) 00:10:05.094 16.972 - 17.067: 99.2829% ( 2) 00:10:05.094 17.161 - 17.256: 99.2974% ( 2) 00:10:05.094 17.256 - 17.351: 99.3191% ( 3) 00:10:05.094 17.351 - 17.446: 99.3336% ( 2) 00:10:05.094 17.446 - 17.541: 99.3408% ( 1) 00:10:05.094 17.541 - 17.636: 99.3481% ( 1) 00:10:05.094 17.636 - 17.730: 99.3625% ( 2) 00:10:05.094 17.730 - 17.825: 99.3698% ( 1) 00:10:05.094 17.825 - 17.920: 99.3915% ( 3) 00:10:05.094 17.920 - 18.015: 99.3988% ( 1) 00:10:05.094 18.204 - 18.299: 99.4133% ( 2) 00:10:05.094 18.394 - 18.489: 99.4205% ( 1) 00:10:05.094 19.342 - 19.437: 99.4277% ( 1) 00:10:05.094 21.807 - 21.902: 99.4350% ( 1) 00:10:05.094 22.850 - 22.945: 99.4422% ( 1) 00:10:05.094 23.419 - 23.514: 99.4495% ( 1) 00:10:05.094 3980.705 - 4004.978: 99.7972% ( 48) 00:10:05.094 4004.978 - 4029.250: 99.9855% ( 26) 00:10:05.094 4029.250 - 4053.523: 100.0000% ( 2) 00:10:05.094 00:10:05.094 10:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:05.094 10:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:05.094 10:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:05.094 10:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:05.094 10:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:05.351 [ 00:10:05.351 { 00:10:05.351 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:05.351 "subtype": "Discovery", 00:10:05.351 "listen_addresses": [], 00:10:05.351 "allow_any_host": true, 00:10:05.351 "hosts": [] 00:10:05.351 }, 00:10:05.351 { 00:10:05.351 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:05.351 "subtype": "NVMe", 00:10:05.351 "listen_addresses": [ 00:10:05.351 { 
00:10:05.351 "trtype": "VFIOUSER", 00:10:05.351 "adrfam": "IPv4", 00:10:05.351 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:05.351 "trsvcid": "0" 00:10:05.351 } 00:10:05.351 ], 00:10:05.351 "allow_any_host": true, 00:10:05.351 "hosts": [], 00:10:05.351 "serial_number": "SPDK1", 00:10:05.351 "model_number": "SPDK bdev Controller", 00:10:05.351 "max_namespaces": 32, 00:10:05.351 "min_cntlid": 1, 00:10:05.351 "max_cntlid": 65519, 00:10:05.351 "namespaces": [ 00:10:05.351 { 00:10:05.351 "nsid": 1, 00:10:05.351 "bdev_name": "Malloc1", 00:10:05.351 "name": "Malloc1", 00:10:05.351 "nguid": "1F8A2149E9BE4D239B689EE89816D81F", 00:10:05.351 "uuid": "1f8a2149-e9be-4d23-9b68-9ee89816d81f" 00:10:05.351 }, 00:10:05.351 { 00:10:05.351 "nsid": 2, 00:10:05.351 "bdev_name": "Malloc3", 00:10:05.351 "name": "Malloc3", 00:10:05.351 "nguid": "BD3B8CA6A1294311B7766AEDFABD7140", 00:10:05.351 "uuid": "bd3b8ca6-a129-4311-b776-6aedfabd7140" 00:10:05.351 } 00:10:05.351 ] 00:10:05.351 }, 00:10:05.351 { 00:10:05.351 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:05.351 "subtype": "NVMe", 00:10:05.351 "listen_addresses": [ 00:10:05.351 { 00:10:05.351 "trtype": "VFIOUSER", 00:10:05.351 "adrfam": "IPv4", 00:10:05.351 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:05.351 "trsvcid": "0" 00:10:05.351 } 00:10:05.351 ], 00:10:05.351 "allow_any_host": true, 00:10:05.351 "hosts": [], 00:10:05.351 "serial_number": "SPDK2", 00:10:05.351 "model_number": "SPDK bdev Controller", 00:10:05.351 "max_namespaces": 32, 00:10:05.351 "min_cntlid": 1, 00:10:05.351 "max_cntlid": 65519, 00:10:05.351 "namespaces": [ 00:10:05.351 { 00:10:05.351 "nsid": 1, 00:10:05.351 "bdev_name": "Malloc2", 00:10:05.351 "name": "Malloc2", 00:10:05.351 "nguid": "F40377090EDB4F30AC997A46A301C6D3", 00:10:05.351 "uuid": "f4037709-0edb-4f30-ac99-7a46a301c6d3" 00:10:05.351 } 00:10:05.351 ] 00:10:05.351 } 00:10:05.351 ] 00:10:05.351 10:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:05.351 10:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1149272 00:10:05.351 10:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:05.351 10:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:05.351 10:26:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:05.351 10:26:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:05.351 10:26:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:05.351 10:26:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:05.351 10:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:05.351 10:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:05.351 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.351 [2024-07-15 10:26:53.851306] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:05.609 Malloc4 00:10:05.609 10:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:05.865 [2024-07-15 10:26:54.224995] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:05.865 10:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:05.865 Asynchronous Event Request test 00:10:05.865 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:05.865 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:05.865 Registering asynchronous event callbacks... 00:10:05.865 Starting namespace attribute notice tests for all controllers... 00:10:05.865 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:05.865 aer_cb - Changed Namespace 00:10:05.865 Cleaning up... 00:10:06.123 [ 00:10:06.123 { 00:10:06.123 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:06.123 "subtype": "Discovery", 00:10:06.123 "listen_addresses": [], 00:10:06.123 "allow_any_host": true, 00:10:06.124 "hosts": [] 00:10:06.124 }, 00:10:06.124 { 00:10:06.124 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:06.124 "subtype": "NVMe", 00:10:06.124 "listen_addresses": [ 00:10:06.124 { 00:10:06.124 "trtype": "VFIOUSER", 00:10:06.124 "adrfam": "IPv4", 00:10:06.124 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:06.124 "trsvcid": "0" 00:10:06.124 } 00:10:06.124 ], 00:10:06.124 "allow_any_host": true, 00:10:06.124 "hosts": [], 00:10:06.124 "serial_number": "SPDK1", 00:10:06.124 "model_number": "SPDK bdev Controller", 00:10:06.124 "max_namespaces": 32, 00:10:06.124 "min_cntlid": 1, 00:10:06.124 "max_cntlid": 65519, 00:10:06.124 "namespaces": [ 00:10:06.124 { 00:10:06.124 "nsid": 1, 00:10:06.124 "bdev_name": "Malloc1", 00:10:06.124 "name": "Malloc1", 00:10:06.124 "nguid": "1F8A2149E9BE4D239B689EE89816D81F", 00:10:06.124 "uuid": "1f8a2149-e9be-4d23-9b68-9ee89816d81f" 00:10:06.124 }, 00:10:06.124 { 00:10:06.124 "nsid": 2, 00:10:06.124 "bdev_name": "Malloc3", 00:10:06.124 "name": "Malloc3", 00:10:06.124 "nguid": "BD3B8CA6A1294311B7766AEDFABD7140", 00:10:06.124 "uuid": "bd3b8ca6-a129-4311-b776-6aedfabd7140" 00:10:06.124 } 00:10:06.124 ] 00:10:06.124 }, 00:10:06.124 { 00:10:06.124 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:06.124 "subtype": "NVMe", 00:10:06.124 "listen_addresses": [ 00:10:06.124 { 00:10:06.124 "trtype": "VFIOUSER", 00:10:06.124 "adrfam": "IPv4", 00:10:06.124 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:06.124 "trsvcid": "0" 00:10:06.124 } 00:10:06.124 ], 00:10:06.124 "allow_any_host": true, 00:10:06.124 "hosts": [], 00:10:06.124 "serial_number": "SPDK2", 00:10:06.124 "model_number": "SPDK bdev Controller", 00:10:06.124 
"max_namespaces": 32, 00:10:06.124 "min_cntlid": 1, 00:10:06.124 "max_cntlid": 65519, 00:10:06.124 "namespaces": [ 00:10:06.124 { 00:10:06.124 "nsid": 1, 00:10:06.124 "bdev_name": "Malloc2", 00:10:06.124 "name": "Malloc2", 00:10:06.124 "nguid": "F40377090EDB4F30AC997A46A301C6D3", 00:10:06.124 "uuid": "f4037709-0edb-4f30-ac99-7a46a301c6d3" 00:10:06.124 }, 00:10:06.124 { 00:10:06.124 "nsid": 2, 00:10:06.124 "bdev_name": "Malloc4", 00:10:06.124 "name": "Malloc4", 00:10:06.124 "nguid": "44327CF55D504EEB88DCD87E3C183086", 00:10:06.124 "uuid": "44327cf5-5d50-4eeb-88dc-d87e3c183086" 00:10:06.124 } 00:10:06.124 ] 00:10:06.124 } 00:10:06.124 ] 00:10:06.124 10:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1149272 00:10:06.124 10:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:06.124 10:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1143709 00:10:06.124 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1143709 ']' 00:10:06.124 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1143709 00:10:06.124 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:06.124 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.124 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1143709 00:10:06.124 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:06.124 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:06.124 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1143709' 00:10:06.124 killing process with pid 1143709 00:10:06.124 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1143709 00:10:06.124 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1143709 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1149455 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1149455' 00:10:06.384 Process pid: 1149455 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1149455 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1149455 ']' 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.384 10:26:54 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.384 10:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:06.643 [2024-07-15 10:26:54.934154] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:06.643 [2024-07-15 10:26:54.935178] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:06.643 [2024-07-15 10:26:54.935263] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.643 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.643 [2024-07-15 10:26:54.992920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.643 [2024-07-15 10:26:55.091125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.643 [2024-07-15 10:26:55.091177] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.643 [2024-07-15 10:26:55.091201] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.643 [2024-07-15 10:26:55.091211] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.643 [2024-07-15 10:26:55.091220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.643 [2024-07-15 10:26:55.091297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.644 [2024-07-15 10:26:55.091362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.644 [2024-07-15 10:26:55.091430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.644 [2024-07-15 10:26:55.091433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.644 [2024-07-15 10:26:55.185720] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:06.644 [2024-07-15 10:26:55.185953] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:06.644 [2024-07-15 10:26:55.186201] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:06.644 [2024-07-15 10:26:55.186833] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:06.644 [2024-07-15 10:26:55.187063] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:10:06.901 10:26:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.901 10:26:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:06.901 10:26:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:07.834 10:26:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:08.093 10:26:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:08.093 10:26:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:08.093 10:26:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:08.093 10:26:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:08.093 10:26:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:08.352 Malloc1 00:10:08.352 10:26:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:08.610 10:26:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:08.867 10:26:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:09.125 10:26:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:09.125 10:26:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:09.125 10:26:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:09.383 Malloc2 00:10:09.383 10:26:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:09.640 10:26:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:09.897 10:26:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:10.154 10:26:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:10.154 10:26:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1149455 00:10:10.154 10:26:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1149455 ']' 00:10:10.154 10:26:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1149455 00:10:10.154 10:26:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:10.154 10:26:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:10.154 10:26:58 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1149455 00:10:10.154 10:26:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:10.154 10:26:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:10.154 10:26:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1149455' 00:10:10.154 killing process with pid 1149455 00:10:10.154 10:26:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1149455 00:10:10.154 10:26:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1149455 00:10:10.411 10:26:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:10.411 10:26:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:10.411 00:10:10.411 real 0m52.626s 00:10:10.411 user 3m27.695s 00:10:10.411 sys 0m4.466s 00:10:10.411 10:26:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.411 10:26:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:10.411 ************************************ 00:10:10.411 END TEST nvmf_vfio_user 00:10:10.411 ************************************ 00:10:10.411 10:26:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:10.411 10:26:58 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:10.411 10:26:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:10.411 10:26:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.412 10:26:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:10.670 ************************************ 00:10:10.670 START TEST nvmf_vfio_user_nvme_compliance 00:10:10.670 ************************************ 00:10:10.670 10:26:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:10.670 * Looking for test storage... 
00:10:10.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1149936 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1149936' 00:10:10.670 Process pid: 1149936 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1149936 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1149936 ']' 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.670 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:10.670 [2024-07-15 10:26:59.087582] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:10.670 [2024-07-15 10:26:59.087676] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.670 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.670 [2024-07-15 10:26:59.145575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:10.945 [2024-07-15 10:26:59.254524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.945 [2024-07-15 10:26:59.254587] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.945 [2024-07-15 10:26:59.254611] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.945 [2024-07-15 10:26:59.254622] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.945 [2024-07-15 10:26:59.254632] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
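For orientation: at this point the compliance script has launched a stock SPDK NVMe-oF target and is polling its RPC socket. A minimal sketch of doing the same by hand outside the test harness (paths relative to an SPDK checkout; the waitforlisten helper is approximated here with rpc.py's timeout option):

    # launch the target: shared-memory id 0, all tracepoint groups enabled, core mask 0x7
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    # block until the RPC server answers on the default socket /var/tmp/spdk.sock
    ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null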
00:10:10.945 [2024-07-15 10:26:59.254719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.945 [2024-07-15 10:26:59.254784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.945 [2024-07-15 10:26:59.255454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.945 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:10.945 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:10:10.945 10:26:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:11.880 malloc0 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.880 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:12.136 10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.136 
10:27:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:12.136 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.136 00:10:12.136 00:10:12.136 CUnit - A unit testing framework for C - Version 2.1-3 00:10:12.136 http://cunit.sourceforge.net/ 00:10:12.136 00:10:12.136 00:10:12.136 Suite: nvme_compliance 00:10:12.137 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 10:27:00.591313] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:12.137 [2024-07-15 10:27:00.592775] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:12.137 [2024-07-15 10:27:00.592806] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:12.137 [2024-07-15 10:27:00.592836] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:12.137 [2024-07-15 10:27:00.594332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:12.137 passed 00:10:12.137 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 10:27:00.681960] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:12.137 [2024-07-15 10:27:00.684984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:12.393 passed 00:10:12.393 Test: admin_identify_ns ...[2024-07-15 10:27:00.771402] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:12.393 [2024-07-15 10:27:00.826835] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:12.393 [2024-07-15 10:27:00.834821] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:12.393 [2024-07-15 10:27:00.855961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:12.393 passed 00:10:12.393 Test: admin_get_features_mandatory_features ...[2024-07-15 10:27:00.938068] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:12.393 [2024-07-15 10:27:00.943101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:12.649 passed 00:10:12.650 Test: admin_get_features_optional_features ...[2024-07-15 10:27:01.027655] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:12.650 [2024-07-15 10:27:01.030675] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:12.650 passed 00:10:12.650 Test: admin_set_features_number_of_queues ...[2024-07-15 10:27:01.113926] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:12.906 [2024-07-15 10:27:01.219904] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:12.906 passed 00:10:12.906 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 10:27:01.300638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:12.906 [2024-07-15 10:27:01.305671] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:12.906 passed 00:10:12.906 Test: admin_get_log_page_with_lpo ...[2024-07-15 10:27:01.388383] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:13.163 [2024-07-15 10:27:01.455837] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:13.163 [2024-07-15 10:27:01.468909] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:13.163 passed 00:10:13.163 Test: fabric_property_get ...[2024-07-15 10:27:01.550209] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:13.163 [2024-07-15 10:27:01.551498] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:13.163 [2024-07-15 10:27:01.553229] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:13.163 passed 00:10:13.163 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 10:27:01.638778] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:13.163 [2024-07-15 10:27:01.640080] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:13.163 [2024-07-15 10:27:01.641818] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:13.163 passed 00:10:13.420 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 10:27:01.724070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:13.420 [2024-07-15 10:27:01.810828] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:13.420 [2024-07-15 10:27:01.826815] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:13.420 [2024-07-15 10:27:01.831921] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:13.420 passed 00:10:13.420 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 10:27:01.914543] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:13.420 [2024-07-15 10:27:01.915828] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:13.421 [2024-07-15 10:27:01.917561] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:13.421 passed 00:10:13.677 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 10:27:01.998751] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:13.677 [2024-07-15 10:27:02.075830] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:13.677 [2024-07-15 10:27:02.099812] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:13.677 [2024-07-15 10:27:02.104922] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:13.677 passed 00:10:13.677 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 10:27:02.188557] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:13.677 [2024-07-15 10:27:02.189857] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:13.677 [2024-07-15 10:27:02.189893] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:13.677 [2024-07-15 10:27:02.191580] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:13.677 passed 00:10:13.934 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 10:27:02.273421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:13.934 [2024-07-15 10:27:02.364825] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:13.934 [2024-07-15 10:27:02.372812] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:13.934 [2024-07-15 10:27:02.380810] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:13.934 [2024-07-15 10:27:02.388831] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:13.934 [2024-07-15 10:27:02.417928] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:13.934 passed 00:10:14.192 Test: admin_create_io_sq_verify_pc ...[2024-07-15 10:27:02.501510] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:14.192 [2024-07-15 10:27:02.517841] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:14.192 [2024-07-15 10:27:02.535838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:14.192 passed 00:10:14.192 Test: admin_create_io_qp_max_qps ...[2024-07-15 10:27:02.622419] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:15.563 [2024-07-15 10:27:03.727835] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:15.820 [2024-07-15 10:27:04.116768] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:15.820 passed 00:10:15.820 Test: admin_create_io_sq_shared_cq ...[2024-07-15 10:27:04.201081] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:15.820 [2024-07-15 10:27:04.332823] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:15.820 [2024-07-15 10:27:04.369912] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:16.078 passed 00:10:16.078 00:10:16.078 Run Summary: Type Total Ran Passed Failed Inactive 00:10:16.078 suites 1 1 n/a 0 0 00:10:16.078 tests 18 18 18 0 0 00:10:16.078 asserts 360 360 360 0 n/a 00:10:16.078 00:10:16.078 Elapsed time = 1.565 seconds 00:10:16.078 10:27:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1149936 00:10:16.078 10:27:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1149936 ']' 00:10:16.078 10:27:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1149936 00:10:16.078 10:27:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:10:16.078 10:27:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:16.078 10:27:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1149936 00:10:16.078 10:27:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:16.078 10:27:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:16.078 10:27:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1149936' 00:10:16.078 killing process with pid 1149936 00:10:16.078 10:27:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1149936 00:10:16.078 10:27:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1149936 00:10:16.336 10:27:04 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:16.336 10:27:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:16.336 00:10:16.336 real 0m5.777s 00:10:16.336 user 0m16.210s 00:10:16.336 sys 0m0.547s 00:10:16.336 10:27:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:16.336 10:27:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:16.336 ************************************ 00:10:16.336 END TEST nvmf_vfio_user_nvme_compliance 00:10:16.336 ************************************ 00:10:16.336 10:27:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:16.336 10:27:04 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:16.336 10:27:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:16.336 10:27:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.336 10:27:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:16.336 ************************************ 00:10:16.337 START TEST nvmf_vfio_user_fuzz 00:10:16.337 ************************************ 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:16.337 * Looking for test storage... 00:10:16.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.337 10:27:04 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1150805 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1150805' 00:10:16.337 Process pid: 1150805 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1150805 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1150805 ']' 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
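The fuzz target is configured with the same vfio-user plumbing the compliance run used; the rpc_cmd calls traced below map one-to-one onto plain rpc.py invocations. A sketch, assuming the nvmf_tgt started just above is serving /var/tmp/spdk.sock:

    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0    # 64 MiB bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0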
00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.337 10:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:16.904 10:27:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.904 10:27:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:10:16.904 10:27:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:17.837 malloc0 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:17.837 10:27:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:10:49.895 Fuzzing completed. 
Shutting down the fuzz application 00:10:49.895 00:10:49.895 Dumping successful admin opcodes: 00:10:49.895 8, 9, 10, 24, 00:10:49.895 Dumping successful io opcodes: 00:10:49.895 0, 00:10:49.895 NS: 0x200003a1ef00 I/O qp, Total commands completed: 657248, total successful commands: 2558, random_seed: 246151744 00:10:49.895 NS: 0x200003a1ef00 admin qp, Total commands completed: 84502, total successful commands: 672, random_seed: 3146015040 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1150805 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1150805 ']' 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1150805 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1150805 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1150805' 00:10:49.895 killing process with pid 1150805 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1150805 00:10:49.895 10:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1150805 00:10:49.895 10:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:10:49.895 10:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:10:49.895 00:10:49.895 real 0m32.281s 00:10:49.895 user 0m33.492s 00:10:49.895 sys 0m25.489s 00:10:49.895 10:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.895 10:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.895 ************************************ 00:10:49.895 END TEST nvmf_vfio_user_fuzz 00:10:49.895 ************************************ 00:10:49.895 10:27:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:49.895 10:27:37 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:49.895 10:27:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:49.895 10:27:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.895 10:27:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:49.896 ************************************ 00:10:49.896 
START TEST nvmf_host_management 00:10:49.896 ************************************ 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:49.896 * Looking for test storage... 00:10:49.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.896 10:27:37 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:49.896 10:27:37 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:10:49.896 10:27:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:50.835 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:50.835 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:50.835 Found net devices under 0000:09:00.0: cvl_0_0 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.835 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:50.836 Found net devices under 0000:09:00.1: cvl_0_1 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:50.836 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:10:51.094 00:10:51.094 --- 10.0.0.2 ping statistics --- 00:10:51.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.094 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:10:51.094 00:10:51.094 --- 10.0.0.1 ping statistics --- 00:10:51.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.094 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:51.094 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.095 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1156729 00:10:51.095 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:51.095 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1156729 00:10:51.095 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1156729 ']' 00:10:51.095 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.095 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:51.095 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:51.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.095 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:51.095 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.095 [2024-07-15 10:27:39.498608] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:51.095 [2024-07-15 10:27:39.498691] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.095 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.095 [2024-07-15 10:27:39.560753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.353 [2024-07-15 10:27:39.668444] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.353 [2024-07-15 10:27:39.668490] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.353 [2024-07-15 10:27:39.668517] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.353 [2024-07-15 10:27:39.668531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.353 [2024-07-15 10:27:39.668540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.353 [2024-07-15 10:27:39.668619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.353 [2024-07-15 10:27:39.668722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.353 [2024-07-15 10:27:39.668773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:51.353 [2024-07-15 10:27:39.668776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.353 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:51.353 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:51.353 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:51.353 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.354 [2024-07-15 10:27:39.810426] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.354 10:27:39 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.354 Malloc0 00:10:51.354 [2024-07-15 10:27:39.868757] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1156776 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1156776 /var/tmp/bdevperf.sock 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1156776 ']' 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:51.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
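The subsystem setup above is batched through test/nvmf/target/rpcs.txt (host_management.sh@22-@30), whose contents are not echoed into this log. The snippet below is only a rough reconstruction of the kind of rpc.py sequence that stands up the Malloc0-backed TCP subsystem seen here: the transport options, bdev name, NQNs and listener address are copied from the log, while the malloc sizes and the exact ordering are assumptions.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                       # as issued via rpc_cmd above
$rpc bdev_malloc_create -b Malloc0 64 512                          # sizes assumed (framework defaults)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420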
00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:51.354 { 00:10:51.354 "params": { 00:10:51.354 "name": "Nvme$subsystem", 00:10:51.354 "trtype": "$TEST_TRANSPORT", 00:10:51.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:51.354 "adrfam": "ipv4", 00:10:51.354 "trsvcid": "$NVMF_PORT", 00:10:51.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:51.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:51.354 "hdgst": ${hdgst:-false}, 00:10:51.354 "ddgst": ${ddgst:-false} 00:10:51.354 }, 00:10:51.354 "method": "bdev_nvme_attach_controller" 00:10:51.354 } 00:10:51.354 EOF 00:10:51.354 )") 00:10:51.354 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:51.612 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:51.612 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:51.612 10:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:51.612 "params": { 00:10:51.612 "name": "Nvme0", 00:10:51.612 "trtype": "tcp", 00:10:51.612 "traddr": "10.0.0.2", 00:10:51.612 "adrfam": "ipv4", 00:10:51.612 "trsvcid": "4420", 00:10:51.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:51.612 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:51.612 "hdgst": false, 00:10:51.612 "ddgst": false 00:10:51.612 }, 00:10:51.612 "method": "bdev_nvme_attach_controller" 00:10:51.612 }' 00:10:51.612 [2024-07-15 10:27:39.939505] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:51.612 [2024-07-15 10:27:39.939581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156776 ] 00:10:51.612 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.612 [2024-07-15 10:27:40.001094] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.612 [2024-07-15 10:27:40.117272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.870 Running I/O for 10 seconds... 
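The bdevperf initiator above reads its bdev configuration from a JSON document generated on the fly by gen_nvmf_target_json and handed over as /dev/fd/63. Written out as a regular file it would look roughly like the sketch below: the bdev_nvme_attach_controller parameters are copied from the printf output above, while the outer "subsystems"/"config" wrapper follows the usual SPDK JSON-config layout and should be treated as an assumption.

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same invocation as above, but against the file instead of a process-substitution fd:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10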
00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:10:51.870 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:10:52.129 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:10:52.129 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:52.129 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:52.129 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:52.129 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.129 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:52.129 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.388 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:10:52.388 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:10:52.388 10:27:40 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:10:52.388 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:52.388 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:52.388 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:52.388 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.388 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:52.388 [2024-07-15 10:27:40.707581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.388 [2024-07-15 10:27:40.707641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.388 [2024-07-15 10:27:40.707671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.388 [2024-07-15 10:27:40.707688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.388 [2024-07-15 10:27:40.707705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.388 [2024-07-15 10:27:40.707721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.388 [2024-07-15 10:27:40.707737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.388 [2024-07-15 10:27:40.707753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.388 [2024-07-15 10:27:40.707770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.388 [2024-07-15 10:27:40.707785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.388 [2024-07-15 10:27:40.707810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.388 [2024-07-15 10:27:40.707828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.388 [2024-07-15 10:27:40.707847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.388 [2024-07-15 10:27:40.707864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.388 [2024-07-15 10:27:40.707890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.388 [2024-07-15 10:27:40.707906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.388 [2024-07-15 10:27:40.707931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.388 [2024-07-15 10:27:40.707947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.388 [2024-07-15 10:27:40.707963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.388 [2024-07-15 10:27:40.707978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.388 [2024-07-15 10:27:40.707995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.388 [2024-07-15 10:27:40.708010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.388 [2024-07-15 10:27:40.708026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.388 [2024-07-15 10:27:40.708042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.388 [2024-07-15 10:27:40.708058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:10:52.389 [2024-07-15 10:27:40.708903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.708981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.708996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.709013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.709028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.709044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.709059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.709075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.709093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.709110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.709125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.709141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.709156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.709172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.709187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.709203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 
10:27:40.709218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.709234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.709249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.709266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.709281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.709297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.709312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.709328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.709343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.709359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.709374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.389 [2024-07-15 10:27:40.709391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.389 [2024-07-15 10:27:40.709406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.390 [2024-07-15 10:27:40.709422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.390 [2024-07-15 10:27:40.709436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.390 [2024-07-15 10:27:40.709454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.390 [2024-07-15 10:27:40.709469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.390 [2024-07-15 10:27:40.709488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.390 [2024-07-15 10:27:40.709504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.390 [2024-07-15 10:27:40.709520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.390 [2024-07-15 10:27:40.709534] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.390 [2024-07-15 10:27:40.709551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.390 [2024-07-15 10:27:40.709566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.390 [2024-07-15 10:27:40.709582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.390 [2024-07-15 10:27:40.709597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.390 [2024-07-15 10:27:40.709613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.390 [2024-07-15 10:27:40.709628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.390 [2024-07-15 10:27:40.709645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.390 [2024-07-15 10:27:40.709660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.390 [2024-07-15 10:27:40.709676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:52.390 [2024-07-15 10:27:40.709691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.390 [2024-07-15 10:27:40.709707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bae900 is same with the state(5) to be set 00:10:52.390 [2024-07-15 10:27:40.709789] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bae900 was disconnected and freed. reset controller. 
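The waitforio helper traced earlier (host_management.sh@54-@62) decides that I/O is flowing by polling bdevperf's RPC socket for the Nvme0n1 read counter; once num_read_ops reaches 100 it breaks out and the test injects the failure that produced the aborted completions above. A minimal reconstruction of that loop, assuming the same RPC socket and bdev name:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
i=10
while (( i != 0 )); do
    reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break      # >=100 read ops counts as "I/O is flowing"
    sleep 0.25                         # same back-off as host_management.sh@62
    (( i-- ))
done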
00:10:52.390 [2024-07-15 10:27:40.710993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:52.390 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.390 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:52.390 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.390 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:52.390 task offset: 89472 on job bdev=Nvme0n1 fails 00:10:52.390 00:10:52.390 Latency(us) 00:10:52.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.390 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:52.390 Job: Nvme0n1 ended in about 0.41 seconds with error 00:10:52.390 Verification LBA range: start 0x0 length 0x400 00:10:52.390 Nvme0n1 : 0.41 1576.71 98.54 157.67 0.00 35842.47 2815.62 34564.17 00:10:52.390 =================================================================================================================== 00:10:52.390 Total : 1576.71 98.54 157.67 0.00 35842.47 2815.62 34564.17 00:10:52.390 [2024-07-15 10:27:40.712919] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:52.390 [2024-07-15 10:27:40.712953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179d790 (9): Bad file descriptor 00:10:52.390 10:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.390 10:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:52.390 [2024-07-15 10:27:40.765199] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
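The failure phase above is driven entirely from the target side: nvmf_subsystem_remove_host (host_management.sh@84) revokes the host's access while bdevperf is mid-run, which tears down the queue pair and produces the long run of ABORTED - SQ DELETION completions, the failed-I/O latency summary and the controller reset. nvmf_subsystem_add_host (@85) then restores access so the reset can reconnect ("Resetting controller successful." above). Condensed, with the NQNs copied from the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# bdevperf now sees its outstanding I/O aborted and starts resetting the controller
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1   # host_management.sh@87: give the reset time to reconnect before re-checking I/O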
00:10:53.323 10:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1156776 00:10:53.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1156776) - No such process 00:10:53.323 10:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:53.323 10:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:53.323 10:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:53.323 10:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:53.323 10:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:53.323 10:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:53.323 10:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:53.323 10:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:53.323 { 00:10:53.323 "params": { 00:10:53.323 "name": "Nvme$subsystem", 00:10:53.323 "trtype": "$TEST_TRANSPORT", 00:10:53.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.323 "adrfam": "ipv4", 00:10:53.323 "trsvcid": "$NVMF_PORT", 00:10:53.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.323 "hdgst": ${hdgst:-false}, 00:10:53.323 "ddgst": ${ddgst:-false} 00:10:53.323 }, 00:10:53.323 "method": "bdev_nvme_attach_controller" 00:10:53.323 } 00:10:53.323 EOF 00:10:53.323 )") 00:10:53.323 10:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:53.323 10:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:53.323 10:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:53.323 10:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:53.323 "params": { 00:10:53.323 "name": "Nvme0", 00:10:53.323 "trtype": "tcp", 00:10:53.323 "traddr": "10.0.0.2", 00:10:53.323 "adrfam": "ipv4", 00:10:53.323 "trsvcid": "4420", 00:10:53.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:53.323 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:53.323 "hdgst": false, 00:10:53.323 "ddgst": false 00:10:53.323 }, 00:10:53.323 "method": "bdev_nvme_attach_controller" 00:10:53.323 }' 00:10:53.323 [2024-07-15 10:27:41.769473] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:53.323 [2024-07-15 10:27:41.769558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157049 ] 00:10:53.323 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.323 [2024-07-15 10:27:41.831654] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.580 [2024-07-15 10:27:41.943994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.837 Running I/O for 1 seconds... 
00:10:54.770 00:10:54.770 Latency(us) 00:10:54.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:54.770 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:54.770 Verification LBA range: start 0x0 length 0x400 00:10:54.770 Nvme0n1 : 1.03 1742.99 108.94 0.00 0.00 36116.26 5971.06 32816.55 00:10:54.770 =================================================================================================================== 00:10:54.770 Total : 1742.99 108.94 0.00 0.00 36116.26 5971.06 32816.55 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:55.027 rmmod nvme_tcp 00:10:55.027 rmmod nvme_fabrics 00:10:55.027 rmmod nvme_keyring 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1156729 ']' 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1156729 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1156729 ']' 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1156729 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1156729 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1156729' 00:10:55.027 killing process with pid 1156729 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1156729 00:10:55.027 10:27:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1156729 00:10:55.591 [2024-07-15 10:27:43.834756] 
app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:55.591 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:55.591 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:55.591 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:55.591 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:55.591 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:55.591 10:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.591 10:27:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.591 10:27:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.525 10:27:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:57.525 10:27:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:57.525 00:10:57.525 real 0m8.788s 00:10:57.525 user 0m19.788s 00:10:57.525 sys 0m2.636s 00:10:57.525 10:27:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:57.525 10:27:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:57.525 ************************************ 00:10:57.525 END TEST nvmf_host_management 00:10:57.525 ************************************ 00:10:57.525 10:27:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:57.525 10:27:45 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:57.525 10:27:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:57.525 10:27:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.525 10:27:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:57.525 ************************************ 00:10:57.525 START TEST nvmf_lvol 00:10:57.525 ************************************ 00:10:57.525 10:27:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:57.525 * Looking for test storage... 
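After the 1-second verification run, nvmftestfini (traced above, just before the nvmf_lvol test starts) tears the host-management setup back down: it unloads the kernel NVMe/TCP initiator modules, kills the nvmf_tgt process that was started inside the namespace, and flushes the initiator-side address. A condensed sketch built only from the commands visible in that trace:

sync
modprobe -v -r nvme-tcp              # the rmmod lines above show nvme_tcp, nvme_fabrics, nvme_keyring going away
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess(): nvmfpid is the nvmf_tgt pid recorded at startup (1156729 here)
ip -4 addr flush cvl_0_1             # clear the initiator interface before the next test starts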
00:10:57.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.525 10:27:46 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:57.525 10:27:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:10:57.526 10:27:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:00.052 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:00.052 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:00.052 Found net devices under 0000:09:00.0: cvl_0_0 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:00.052 Found net devices under 0000:09:00.1: cvl_0_1 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:00.052 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:00.053 
10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:00.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:11:00.053 00:11:00.053 --- 10.0.0.2 ping statistics --- 00:11:00.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.053 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:11:00.053 00:11:00.053 --- 10.0.0.1 ping statistics --- 00:11:00.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.053 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1159247 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1159247 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1159247 ']' 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:00.053 [2024-07-15 10:27:48.303655] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:00.053 [2024-07-15 10:27:48.303729] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.053 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.053 [2024-07-15 10:27:48.365932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:00.053 [2024-07-15 10:27:48.476249] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.053 [2024-07-15 10:27:48.476304] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
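(Editorial sketch — the nvmf_tcp_init sequence traced above builds a two-port loopback topology: one E810 port, cvl_0_0, is moved into a private network namespace to act as the NVMe/TCP target, while cvl_0_1 stays in the root namespace as the initiator. The interface names, namespace name and 10.0.0.0/24 addresses below are taken directly from the trace; this is a condensed summary of the logged commands, not the authoritative nvmf/common.sh implementation.)

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic (port 4420) in
ping -c 1 10.0.0.2                                                   # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> initiator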
00:11:00.053 [2024-07-15 10:27:48.476331] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.053 [2024-07-15 10:27:48.476343] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.053 [2024-07-15 10:27:48.476352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.053 [2024-07-15 10:27:48.476447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.053 [2024-07-15 10:27:48.476522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.053 [2024-07-15 10:27:48.476519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:00.053 10:27:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:00.310 10:27:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.310 10:27:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:00.310 [2024-07-15 10:27:48.843365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.567 10:27:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.824 10:27:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:00.824 10:27:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:01.088 10:27:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:01.088 10:27:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:01.405 10:27:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:01.680 10:27:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=09910f87-5ef8-4b59-b66e-c1efb6fac3c0 00:11:01.680 10:27:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 09910f87-5ef8-4b59-b66e-c1efb6fac3c0 lvol 20 00:11:01.680 10:27:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f02e2c33-cebb-4046-b857-6d4c23101afb 00:11:01.680 10:27:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:01.938 10:27:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f02e2c33-cebb-4046-b857-6d4c23101afb 00:11:02.195 10:27:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:11:02.454 [2024-07-15 10:27:50.941279] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.454 10:27:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:02.711 10:27:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1159559 00:11:02.711 10:27:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:02.711 10:27:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:02.711 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.081 10:27:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f02e2c33-cebb-4046-b857-6d4c23101afb MY_SNAPSHOT 00:11:04.082 10:27:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5f91464f-7fb0-4ef0-9d39-e3ce2d877a0a 00:11:04.082 10:27:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f02e2c33-cebb-4046-b857-6d4c23101afb 30 00:11:04.339 10:27:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5f91464f-7fb0-4ef0-9d39-e3ce2d877a0a MY_CLONE 00:11:04.637 10:27:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4997bbd9-fa64-47d3-b3d8-bffc7054f877 00:11:04.637 10:27:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4997bbd9-fa64-47d3-b3d8-bffc7054f877 00:11:05.200 10:27:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1159559 00:11:13.300 Initializing NVMe Controllers 00:11:13.300 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:13.300 Controller IO queue size 128, less than required. 00:11:13.300 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:13.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:13.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:13.300 Initialization complete. Launching workers. 
00:11:13.300 ======================================================== 00:11:13.300 Latency(us) 00:11:13.300 Device Information : IOPS MiB/s Average min max 00:11:13.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10648.70 41.60 12020.85 687.33 65242.27 00:11:13.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10586.50 41.35 12098.15 2066.15 86192.98 00:11:13.300 ======================================================== 00:11:13.300 Total : 21235.20 82.95 12059.39 687.33 86192.98 00:11:13.300 00:11:13.300 10:28:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:13.558 10:28:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f02e2c33-cebb-4046-b857-6d4c23101afb 00:11:13.816 10:28:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 09910f87-5ef8-4b59-b66e-c1efb6fac3c0 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:14.074 rmmod nvme_tcp 00:11:14.074 rmmod nvme_fabrics 00:11:14.074 rmmod nvme_keyring 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1159247 ']' 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1159247 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1159247 ']' 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1159247 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1159247 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1159247' 00:11:14.074 killing process with pid 1159247 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1159247 00:11:14.074 10:28:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1159247 00:11:14.641 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:14.641 
10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:14.641 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:14.641 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:14.641 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:14.641 10:28:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.641 10:28:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.641 10:28:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.548 10:28:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:16.548 00:11:16.548 real 0m19.020s 00:11:16.548 user 1m4.407s 00:11:16.548 sys 0m5.768s 00:11:16.548 10:28:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:16.548 10:28:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:16.548 ************************************ 00:11:16.548 END TEST nvmf_lvol 00:11:16.548 ************************************ 00:11:16.548 10:28:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:16.548 10:28:04 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:16.548 10:28:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:16.548 10:28:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.548 10:28:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:16.548 ************************************ 00:11:16.548 START TEST nvmf_lvs_grow 00:11:16.548 ************************************ 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:16.548 * Looking for test storage... 
00:11:16.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:16.548 10:28:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:19.080 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:19.080 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:19.080 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:19.081 Found net devices under 0000:09:00.0: cvl_0_0 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:19.081 Found net devices under 0000:09:00.1: cvl_0_1 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:19.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:11:19.081 00:11:19.081 --- 10.0.0.2 ping statistics --- 00:11:19.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.081 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:19.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:11:19.081 00:11:19.081 --- 10.0.0.1 ping statistics --- 00:11:19.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.081 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1162936 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1162936 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1162936 ']' 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:19.081 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:19.081 [2024-07-15 10:28:07.362131] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:19.081 [2024-07-15 10:28:07.362220] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.081 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.081 [2024-07-15 10:28:07.426465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.081 [2024-07-15 10:28:07.536303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.081 [2024-07-15 10:28:07.536370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
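(Editorial sketch — the nvmfappstart/waitforlisten pattern visible in the trace: the nvme-tcp kernel module is loaded, nvmf_tgt is launched inside the target namespace with core mask 0x1, its pid is recorded, 1162936 in this run, and the helper blocks until the app is listening on /var/tmp/spdk.sock before any rpc.py call is issued. The helper names come from SPDK's test scripts; their internals are summarized here, not reproduced.)

modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
waitforlisten "$nvmfpid"          # polls until /var/tmp/spdk.sock accepts RPC connections
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT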
00:11:19.081 [2024-07-15 10:28:07.536384] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.081 [2024-07-15 10:28:07.536394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.081 [2024-07-15 10:28:07.536404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.081 [2024-07-15 10:28:07.536444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.337 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.337 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:19.337 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:19.337 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:19.337 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:19.338 10:28:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.338 10:28:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:19.595 [2024-07-15 10:28:07.942489] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.595 10:28:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:19.595 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:19.595 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.595 10:28:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:19.595 ************************************ 00:11:19.595 START TEST lvs_grow_clean 00:11:19.595 ************************************ 00:11:19.595 10:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:19.595 10:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:19.595 10:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:19.595 10:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:19.595 10:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:19.595 10:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:19.595 10:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:19.595 10:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:19.595 10:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:19.595 10:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:19.852 10:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:11:19.852 10:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:20.109 10:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=235a2703-d071-4b86-bb34-eb391830b0a2 00:11:20.109 10:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235a2703-d071-4b86-bb34-eb391830b0a2 00:11:20.109 10:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:20.402 10:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:20.402 10:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:20.402 10:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 235a2703-d071-4b86-bb34-eb391830b0a2 lvol 150 00:11:20.658 10:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a8832443-4445-4b20-971a-aea26a80743c 00:11:20.659 10:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:20.659 10:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:20.916 [2024-07-15 10:28:09.302040] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:20.916 [2024-07-15 10:28:09.302127] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:20.916 true 00:11:20.916 10:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235a2703-d071-4b86-bb34-eb391830b0a2 00:11:20.916 10:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:21.173 10:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:21.173 10:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:21.431 10:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a8832443-4445-4b20-971a-aea26a80743c 00:11:21.688 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:21.945 [2024-07-15 10:28:10.289077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.945 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:22.203 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1163346 00:11:22.203 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:22.203 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:22.203 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1163346 /var/tmp/bdevperf.sock 00:11:22.203 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1163346 ']' 00:11:22.203 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:22.203 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.203 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:22.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:22.204 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.204 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:22.204 [2024-07-15 10:28:10.584968] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:22.204 [2024-07-15 10:28:10.585058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163346 ] 00:11:22.204 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.204 [2024-07-15 10:28:10.643425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.204 [2024-07-15 10:28:10.751607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.460 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.460 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:22.460 10:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:22.717 Nvme0n1 00:11:22.717 10:28:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:22.974 [ 00:11:22.974 { 00:11:22.974 "name": "Nvme0n1", 00:11:22.974 "aliases": [ 00:11:22.974 "a8832443-4445-4b20-971a-aea26a80743c" 00:11:22.974 ], 00:11:22.974 "product_name": "NVMe disk", 00:11:22.974 "block_size": 4096, 00:11:22.974 "num_blocks": 38912, 00:11:22.974 "uuid": "a8832443-4445-4b20-971a-aea26a80743c", 00:11:22.974 "assigned_rate_limits": { 00:11:22.974 "rw_ios_per_sec": 0, 00:11:22.974 "rw_mbytes_per_sec": 0, 00:11:22.974 "r_mbytes_per_sec": 0, 00:11:22.974 "w_mbytes_per_sec": 0 00:11:22.974 }, 00:11:22.974 "claimed": false, 00:11:22.974 "zoned": false, 00:11:22.974 "supported_io_types": { 00:11:22.974 "read": true, 00:11:22.974 "write": true, 00:11:22.974 "unmap": true, 00:11:22.974 "flush": true, 00:11:22.974 "reset": true, 00:11:22.974 "nvme_admin": true, 00:11:22.974 "nvme_io": true, 00:11:22.974 "nvme_io_md": false, 00:11:22.974 "write_zeroes": true, 00:11:22.974 "zcopy": false, 00:11:22.974 "get_zone_info": false, 00:11:22.974 "zone_management": false, 00:11:22.974 "zone_append": false, 00:11:22.974 "compare": true, 00:11:22.974 "compare_and_write": true, 00:11:22.974 "abort": true, 00:11:22.974 "seek_hole": false, 00:11:22.974 "seek_data": false, 00:11:22.974 "copy": true, 00:11:22.974 "nvme_iov_md": false 00:11:22.974 }, 00:11:22.974 "memory_domains": [ 00:11:22.974 { 00:11:22.974 "dma_device_id": "system", 00:11:22.974 "dma_device_type": 1 00:11:22.974 } 00:11:22.974 ], 00:11:22.974 "driver_specific": { 00:11:22.974 "nvme": [ 00:11:22.974 { 00:11:22.974 "trid": { 00:11:22.974 "trtype": "TCP", 00:11:22.974 "adrfam": "IPv4", 00:11:22.974 "traddr": "10.0.0.2", 00:11:22.974 "trsvcid": "4420", 00:11:22.974 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:22.974 }, 00:11:22.974 "ctrlr_data": { 00:11:22.974 "cntlid": 1, 00:11:22.974 "vendor_id": "0x8086", 00:11:22.974 "model_number": "SPDK bdev Controller", 00:11:22.974 "serial_number": "SPDK0", 00:11:22.974 "firmware_revision": "24.09", 00:11:22.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:22.974 "oacs": { 00:11:22.974 "security": 0, 00:11:22.974 "format": 0, 00:11:22.974 "firmware": 0, 00:11:22.974 "ns_manage": 0 00:11:22.974 }, 00:11:22.974 "multi_ctrlr": true, 00:11:22.974 "ana_reporting": false 00:11:22.974 }, 
00:11:22.974 "vs": { 00:11:22.974 "nvme_version": "1.3" 00:11:22.974 }, 00:11:22.974 "ns_data": { 00:11:22.974 "id": 1, 00:11:22.974 "can_share": true 00:11:22.974 } 00:11:22.974 } 00:11:22.974 ], 00:11:22.974 "mp_policy": "active_passive" 00:11:22.974 } 00:11:22.974 } 00:11:22.974 ] 00:11:22.974 10:28:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1163388 00:11:22.974 10:28:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:22.974 10:28:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:23.231 Running I/O for 10 seconds... 00:11:24.163 Latency(us) 00:11:24.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:24.163 Nvme0n1 : 1.00 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:11:24.163 =================================================================================================================== 00:11:24.163 Total : 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:11:24.163 00:11:25.098 10:28:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 235a2703-d071-4b86-bb34-eb391830b0a2 00:11:25.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:25.098 Nvme0n1 : 2.00 15439.50 60.31 0.00 0.00 0.00 0.00 0.00 00:11:25.098 =================================================================================================================== 00:11:25.098 Total : 15439.50 60.31 0.00 0.00 0.00 0.00 0.00 00:11:25.098 00:11:25.356 true 00:11:25.356 10:28:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235a2703-d071-4b86-bb34-eb391830b0a2 00:11:25.356 10:28:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:25.612 10:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:25.613 10:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:25.613 10:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1163388 00:11:26.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:26.176 Nvme0n1 : 3.00 15543.33 60.72 0.00 0.00 0.00 0.00 0.00 00:11:26.176 =================================================================================================================== 00:11:26.176 Total : 15543.33 60.72 0.00 0.00 0.00 0.00 0.00 00:11:26.176 00:11:27.139 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:27.139 Nvme0n1 : 4.00 15634.75 61.07 0.00 0.00 0.00 0.00 0.00 00:11:27.139 =================================================================================================================== 00:11:27.139 Total : 15634.75 61.07 0.00 0.00 0.00 0.00 0.00 00:11:27.139 00:11:28.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:28.072 Nvme0n1 : 5.00 15733.60 61.46 0.00 0.00 0.00 0.00 0.00 00:11:28.072 =================================================================================================================== 00:11:28.072 
Total : 15733.60 61.46 0.00 0.00 0.00 0.00 0.00 00:11:28.072 00:11:29.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:29.445 Nvme0n1 : 6.00 15795.67 61.70 0.00 0.00 0.00 0.00 0.00 00:11:29.445 =================================================================================================================== 00:11:29.445 Total : 15795.67 61.70 0.00 0.00 0.00 0.00 0.00 00:11:29.445 00:11:30.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:30.379 Nvme0n1 : 7.00 15829.71 61.83 0.00 0.00 0.00 0.00 0.00 00:11:30.379 =================================================================================================================== 00:11:30.379 Total : 15829.71 61.83 0.00 0.00 0.00 0.00 0.00 00:11:30.379 00:11:31.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.340 Nvme0n1 : 8.00 15859.38 61.95 0.00 0.00 0.00 0.00 0.00 00:11:31.340 =================================================================================================================== 00:11:31.340 Total : 15859.38 61.95 0.00 0.00 0.00 0.00 0.00 00:11:31.340 00:11:32.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:32.272 Nvme0n1 : 9.00 15889.33 62.07 0.00 0.00 0.00 0.00 0.00 00:11:32.272 =================================================================================================================== 00:11:32.272 Total : 15889.33 62.07 0.00 0.00 0.00 0.00 0.00 00:11:32.272 00:11:33.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.205 Nvme0n1 : 10.00 15916.90 62.18 0.00 0.00 0.00 0.00 0.00 00:11:33.205 =================================================================================================================== 00:11:33.205 Total : 15916.90 62.18 0.00 0.00 0.00 0.00 0.00 00:11:33.205 00:11:33.205 00:11:33.205 Latency(us) 00:11:33.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.205 Nvme0n1 : 10.00 15914.36 62.17 0.00 0.00 8037.40 4296.25 17476.27 00:11:33.205 =================================================================================================================== 00:11:33.205 Total : 15914.36 62.17 0.00 0.00 8037.40 4296.25 17476.27 00:11:33.205 0 00:11:33.205 10:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1163346 00:11:33.205 10:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1163346 ']' 00:11:33.205 10:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1163346 00:11:33.205 10:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:11:33.205 10:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:33.205 10:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1163346 00:11:33.205 10:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:33.205 10:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:33.205 10:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1163346' 00:11:33.205 killing process with pid 1163346 00:11:33.205 10:28:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1163346 00:11:33.205 Received shutdown signal, test time was about 10.000000 seconds 00:11:33.205 00:11:33.205 Latency(us) 00:11:33.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.205 =================================================================================================================== 00:11:33.205 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:33.205 10:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1163346 00:11:33.463 10:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:33.720 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:33.977 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235a2703-d071-4b86-bb34-eb391830b0a2 00:11:33.977 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:34.234 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:34.234 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:34.234 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:34.492 [2024-07-15 10:28:22.889494] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:34.492 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235a2703-d071-4b86-bb34-eb391830b0a2 00:11:34.492 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:34.492 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235a2703-d071-4b86-bb34-eb391830b0a2 00:11:34.492 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:34.492 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:34.492 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:34.492 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:34.492 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:34.492 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:34.492 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:34.493 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:34.493 10:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235a2703-d071-4b86-bb34-eb391830b0a2 00:11:34.750 request: 00:11:34.750 { 00:11:34.750 "uuid": "235a2703-d071-4b86-bb34-eb391830b0a2", 00:11:34.750 "method": "bdev_lvol_get_lvstores", 00:11:34.750 "req_id": 1 00:11:34.750 } 00:11:34.750 Got JSON-RPC error response 00:11:34.750 response: 00:11:34.750 { 00:11:34.750 "code": -19, 00:11:34.750 "message": "No such device" 00:11:34.750 } 00:11:34.750 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:34.750 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:34.750 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:34.750 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:34.750 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:35.007 aio_bdev 00:11:35.007 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a8832443-4445-4b20-971a-aea26a80743c 00:11:35.007 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=a8832443-4445-4b20-971a-aea26a80743c 00:11:35.007 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:35.008 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:11:35.008 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:35.008 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:35.008 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:35.264 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a8832443-4445-4b20-971a-aea26a80743c -t 2000 00:11:35.522 [ 00:11:35.522 { 00:11:35.522 "name": "a8832443-4445-4b20-971a-aea26a80743c", 00:11:35.522 "aliases": [ 00:11:35.522 "lvs/lvol" 00:11:35.522 ], 00:11:35.522 "product_name": "Logical Volume", 00:11:35.522 "block_size": 4096, 00:11:35.522 "num_blocks": 38912, 00:11:35.522 "uuid": "a8832443-4445-4b20-971a-aea26a80743c", 00:11:35.522 "assigned_rate_limits": { 00:11:35.522 "rw_ios_per_sec": 0, 00:11:35.522 "rw_mbytes_per_sec": 0, 00:11:35.522 "r_mbytes_per_sec": 0, 00:11:35.522 "w_mbytes_per_sec": 0 00:11:35.522 }, 00:11:35.522 "claimed": false, 00:11:35.522 "zoned": false, 00:11:35.522 "supported_io_types": { 00:11:35.522 "read": true, 00:11:35.522 "write": true, 00:11:35.522 "unmap": true, 00:11:35.522 "flush": false, 00:11:35.522 "reset": true, 00:11:35.522 "nvme_admin": false, 00:11:35.522 "nvme_io": false, 00:11:35.522 
"nvme_io_md": false, 00:11:35.522 "write_zeroes": true, 00:11:35.522 "zcopy": false, 00:11:35.522 "get_zone_info": false, 00:11:35.522 "zone_management": false, 00:11:35.522 "zone_append": false, 00:11:35.522 "compare": false, 00:11:35.522 "compare_and_write": false, 00:11:35.522 "abort": false, 00:11:35.522 "seek_hole": true, 00:11:35.522 "seek_data": true, 00:11:35.522 "copy": false, 00:11:35.522 "nvme_iov_md": false 00:11:35.522 }, 00:11:35.522 "driver_specific": { 00:11:35.522 "lvol": { 00:11:35.522 "lvol_store_uuid": "235a2703-d071-4b86-bb34-eb391830b0a2", 00:11:35.522 "base_bdev": "aio_bdev", 00:11:35.522 "thin_provision": false, 00:11:35.522 "num_allocated_clusters": 38, 00:11:35.522 "snapshot": false, 00:11:35.522 "clone": false, 00:11:35.522 "esnap_clone": false 00:11:35.522 } 00:11:35.522 } 00:11:35.522 } 00:11:35.522 ] 00:11:35.522 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:11:35.522 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235a2703-d071-4b86-bb34-eb391830b0a2 00:11:35.522 10:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:35.780 10:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:35.780 10:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235a2703-d071-4b86-bb34-eb391830b0a2 00:11:35.780 10:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:36.036 10:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:36.036 10:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a8832443-4445-4b20-971a-aea26a80743c 00:11:36.294 10:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 235a2703-d071-4b86-bb34-eb391830b0a2 00:11:36.552 10:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:36.809 00:11:36.809 real 0m17.309s 00:11:36.809 user 0m16.790s 00:11:36.809 sys 0m1.876s 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:36.809 ************************************ 00:11:36.809 END TEST lvs_grow_clean 00:11:36.809 ************************************ 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:36.809 ************************************ 00:11:36.809 START TEST lvs_grow_dirty 00:11:36.809 ************************************ 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:36.809 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:37.375 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:37.375 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:37.375 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7e841a96-eac0-4e31-8e63-618efc1749e2 00:11:37.375 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 00:11:37.375 10:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:37.632 10:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:37.632 10:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:37.632 10:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7e841a96-eac0-4e31-8e63-618efc1749e2 lvol 150 00:11:37.890 10:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a 00:11:37.890 10:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:37.890 10:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:38.148 
[2024-07-15 10:28:26.626982] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:38.148 [2024-07-15 10:28:26.627069] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:38.148 true 00:11:38.148 10:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 00:11:38.148 10:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:38.405 10:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:38.405 10:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:38.662 10:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a 00:11:38.920 10:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:39.178 [2024-07-15 10:28:27.654045] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.178 10:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:39.436 10:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1165433 00:11:39.436 10:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:39.436 10:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:39.436 10:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1165433 /var/tmp/bdevperf.sock 00:11:39.436 10:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1165433 ']' 00:11:39.436 10:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:39.436 10:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:39.436 10:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:39.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
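For orientation, the lvs_grow_dirty setup that the records above walk through condenses to the sketch below. It is a reading aid, not part of the test log: the workspace paths, the 10.0.0.2/4420 listener, the lvstore UUID capture and the 200M/400M/150M sizes are simply the values this particular run used.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
aio_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$aio_file"                                  # 200M backing file
$rpc bdev_aio_create "$aio_file" aio_bdev 4096                # AIO bdev, 4K blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)          # 49 data clusters of 4MiB each
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)              # 150M thick-provisioned lvol
truncate -s 400M "$aio_file"                                  # grow the file underneath the bdev
$rpc bdev_aio_rescan aio_bdev                                 # bdev now reports 102400 blocks; lvstore still 49 clusters
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420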
00:11:39.436 10:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:39.436 10:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:39.436 [2024-07-15 10:28:27.946072] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:39.436 [2024-07-15 10:28:27.946169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165433 ] 00:11:39.436 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.694 [2024-07-15 10:28:28.005194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.694 [2024-07-15 10:28:28.119290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.694 10:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:39.694 10:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:39.694 10:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:40.258 Nvme0n1 00:11:40.258 10:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:40.516 [ 00:11:40.516 { 00:11:40.516 "name": "Nvme0n1", 00:11:40.516 "aliases": [ 00:11:40.516 "c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a" 00:11:40.516 ], 00:11:40.516 "product_name": "NVMe disk", 00:11:40.516 "block_size": 4096, 00:11:40.516 "num_blocks": 38912, 00:11:40.516 "uuid": "c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a", 00:11:40.516 "assigned_rate_limits": { 00:11:40.516 "rw_ios_per_sec": 0, 00:11:40.516 "rw_mbytes_per_sec": 0, 00:11:40.516 "r_mbytes_per_sec": 0, 00:11:40.516 "w_mbytes_per_sec": 0 00:11:40.516 }, 00:11:40.516 "claimed": false, 00:11:40.516 "zoned": false, 00:11:40.516 "supported_io_types": { 00:11:40.516 "read": true, 00:11:40.516 "write": true, 00:11:40.516 "unmap": true, 00:11:40.516 "flush": true, 00:11:40.516 "reset": true, 00:11:40.516 "nvme_admin": true, 00:11:40.516 "nvme_io": true, 00:11:40.516 "nvme_io_md": false, 00:11:40.516 "write_zeroes": true, 00:11:40.516 "zcopy": false, 00:11:40.516 "get_zone_info": false, 00:11:40.516 "zone_management": false, 00:11:40.516 "zone_append": false, 00:11:40.516 "compare": true, 00:11:40.516 "compare_and_write": true, 00:11:40.516 "abort": true, 00:11:40.516 "seek_hole": false, 00:11:40.516 "seek_data": false, 00:11:40.516 "copy": true, 00:11:40.516 "nvme_iov_md": false 00:11:40.516 }, 00:11:40.516 "memory_domains": [ 00:11:40.516 { 00:11:40.516 "dma_device_id": "system", 00:11:40.516 "dma_device_type": 1 00:11:40.516 } 00:11:40.516 ], 00:11:40.516 "driver_specific": { 00:11:40.516 "nvme": [ 00:11:40.516 { 00:11:40.516 "trid": { 00:11:40.516 "trtype": "TCP", 00:11:40.516 "adrfam": "IPv4", 00:11:40.516 "traddr": "10.0.0.2", 00:11:40.516 "trsvcid": "4420", 00:11:40.516 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:40.516 }, 00:11:40.516 "ctrlr_data": { 00:11:40.516 "cntlid": 1, 00:11:40.516 "vendor_id": "0x8086", 00:11:40.516 "model_number": "SPDK bdev Controller", 00:11:40.516 "serial_number": "SPDK0", 
00:11:40.516 "firmware_revision": "24.09", 00:11:40.516 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:40.516 "oacs": { 00:11:40.516 "security": 0, 00:11:40.516 "format": 0, 00:11:40.516 "firmware": 0, 00:11:40.516 "ns_manage": 0 00:11:40.516 }, 00:11:40.516 "multi_ctrlr": true, 00:11:40.516 "ana_reporting": false 00:11:40.516 }, 00:11:40.516 "vs": { 00:11:40.516 "nvme_version": "1.3" 00:11:40.516 }, 00:11:40.517 "ns_data": { 00:11:40.517 "id": 1, 00:11:40.517 "can_share": true 00:11:40.517 } 00:11:40.517 } 00:11:40.517 ], 00:11:40.517 "mp_policy": "active_passive" 00:11:40.517 } 00:11:40.517 } 00:11:40.517 ] 00:11:40.517 10:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1165569 00:11:40.517 10:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:40.517 10:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:40.775 Running I/O for 10 seconds... 00:11:41.708 Latency(us) 00:11:41.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:41.708 Nvme0n1 : 1.00 15439.00 60.31 0.00 0.00 0.00 0.00 0.00 00:11:41.708 =================================================================================================================== 00:11:41.708 Total : 15439.00 60.31 0.00 0.00 0.00 0.00 0.00 00:11:41.708 00:11:42.640 10:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7e841a96-eac0-4e31-8e63-618efc1749e2 00:11:42.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.640 Nvme0n1 : 2.00 15593.50 60.91 0.00 0.00 0.00 0.00 0.00 00:11:42.640 =================================================================================================================== 00:11:42.640 Total : 15593.50 60.91 0.00 0.00 0.00 0.00 0.00 00:11:42.640 00:11:42.897 true 00:11:42.897 10:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 00:11:42.897 10:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:43.155 10:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:43.155 10:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:43.155 10:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1165569 00:11:43.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:43.718 Nvme0n1 : 3.00 15687.33 61.28 0.00 0.00 0.00 0.00 0.00 00:11:43.718 =================================================================================================================== 00:11:43.718 Total : 15687.33 61.28 0.00 0.00 0.00 0.00 0.00 00:11:43.718 00:11:44.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.648 Nvme0n1 : 4.00 15797.75 61.71 0.00 0.00 0.00 0.00 0.00 00:11:44.648 =================================================================================================================== 00:11:44.648 Total : 15797.75 61.71 0.00 
0.00 0.00 0.00 0.00 00:11:44.649 00:11:45.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:45.580 Nvme0n1 : 5.00 15838.60 61.87 0.00 0.00 0.00 0.00 0.00 00:11:45.580 =================================================================================================================== 00:11:45.580 Total : 15838.60 61.87 0.00 0.00 0.00 0.00 0.00 00:11:45.580 00:11:46.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:46.952 Nvme0n1 : 6.00 15892.67 62.08 0.00 0.00 0.00 0.00 0.00 00:11:46.952 =================================================================================================================== 00:11:46.952 Total : 15892.67 62.08 0.00 0.00 0.00 0.00 0.00 00:11:46.952 00:11:47.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.882 Nvme0n1 : 7.00 15953.86 62.32 0.00 0.00 0.00 0.00 0.00 00:11:47.882 =================================================================================================================== 00:11:47.882 Total : 15953.86 62.32 0.00 0.00 0.00 0.00 0.00 00:11:47.882 00:11:48.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.813 Nvme0n1 : 8.00 16007.75 62.53 0.00 0.00 0.00 0.00 0.00 00:11:48.813 =================================================================================================================== 00:11:48.813 Total : 16007.75 62.53 0.00 0.00 0.00 0.00 0.00 00:11:48.813 00:11:49.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.745 Nvme0n1 : 9.00 16042.67 62.67 0.00 0.00 0.00 0.00 0.00 00:11:49.745 =================================================================================================================== 00:11:49.745 Total : 16042.67 62.67 0.00 0.00 0.00 0.00 0.00 00:11:49.745 00:11:50.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:50.678 Nvme0n1 : 10.00 16076.70 62.80 0.00 0.00 0.00 0.00 0.00 00:11:50.678 =================================================================================================================== 00:11:50.678 Total : 16076.70 62.80 0.00 0.00 0.00 0.00 0.00 00:11:50.678 00:11:50.678 00:11:50.678 Latency(us) 00:11:50.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:50.678 Nvme0n1 : 10.01 16080.47 62.81 0.00 0.00 7955.29 3373.89 18641.35 00:11:50.678 =================================================================================================================== 00:11:50.678 Total : 16080.47 62.81 0.00 0.00 7955.29 3373.89 18641.35 00:11:50.678 0 00:11:50.678 10:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1165433 00:11:50.678 10:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1165433 ']' 00:11:50.678 10:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1165433 00:11:50.678 10:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:11:50.678 10:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:50.678 10:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1165433 00:11:50.678 10:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:50.678 10:28:39 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:50.678 10:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1165433' 00:11:50.678 killing process with pid 1165433 00:11:50.678 10:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1165433 00:11:50.678 Received shutdown signal, test time was about 10.000000 seconds 00:11:50.678 00:11:50.678 Latency(us) 00:11:50.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.678 =================================================================================================================== 00:11:50.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:50.678 10:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1165433 00:11:50.936 10:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:51.194 10:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:51.761 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 00:11:51.761 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:51.761 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:51.761 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:51.761 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1162936 00:11:51.761 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1162936 00:11:51.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1162936 Killed "${NVMF_APP[@]}" "$@" 00:11:51.761 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:51.761 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:51.761 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:51.761 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:51.761 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:52.019 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1166894 00:11:52.019 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:52.019 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1166894 00:11:52.019 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1166894 ']' 00:11:52.019 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.019 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.019 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.019 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.019 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:52.019 [2024-07-15 10:28:40.362392] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:52.019 [2024-07-15 10:28:40.362479] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.019 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.019 [2024-07-15 10:28:40.436826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.019 [2024-07-15 10:28:40.548838] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.019 [2024-07-15 10:28:40.548896] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.019 [2024-07-15 10:28:40.548924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.019 [2024-07-15 10:28:40.548936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.019 [2024-07-15 10:28:40.548945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
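The SIGKILL above leaves the lvstore dirty: it was grown while bdevperf was running but never cleanly unloaded. The fresh nvmf_tgt started here recovers it by re-registering the same backing file, and the records that follow amount to the sketch below (same paths as the earlier sketch, UUIDs from this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
aio_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
$rpc bdev_aio_create "$aio_file" aio_bdev 4096     # triggers "Performing recovery on blobstore"
$rpc bdev_wait_for_examine                         # wait until the recovered lvol bdev is exposed
$rpc bdev_get_bdevs -b c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a -t 2000
$rpc bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 \
  | jq -r '.[0].free_clusters'          # 61: the grown geometry survived the crash
$rpc bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 \
  | jq -r '.[0].total_data_clusters'    # 99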
00:11:52.019 [2024-07-15 10:28:40.548978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.277 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.277 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:52.277 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:52.277 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:52.277 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:52.277 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.277 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:52.535 [2024-07-15 10:28:40.903126] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:52.535 [2024-07-15 10:28:40.903256] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:52.535 [2024-07-15 10:28:40.903302] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:52.535 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:52.535 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a 00:11:52.535 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a 00:11:52.535 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:52.535 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:52.535 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:52.535 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:52.535 10:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:52.792 10:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a -t 2000 00:11:53.049 [ 00:11:53.049 { 00:11:53.049 "name": "c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a", 00:11:53.049 "aliases": [ 00:11:53.049 "lvs/lvol" 00:11:53.049 ], 00:11:53.049 "product_name": "Logical Volume", 00:11:53.049 "block_size": 4096, 00:11:53.049 "num_blocks": 38912, 00:11:53.049 "uuid": "c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a", 00:11:53.049 "assigned_rate_limits": { 00:11:53.049 "rw_ios_per_sec": 0, 00:11:53.049 "rw_mbytes_per_sec": 0, 00:11:53.049 "r_mbytes_per_sec": 0, 00:11:53.049 "w_mbytes_per_sec": 0 00:11:53.049 }, 00:11:53.049 "claimed": false, 00:11:53.049 "zoned": false, 00:11:53.049 "supported_io_types": { 00:11:53.049 "read": true, 00:11:53.049 "write": true, 00:11:53.049 "unmap": true, 00:11:53.049 "flush": false, 00:11:53.049 "reset": true, 00:11:53.049 "nvme_admin": false, 00:11:53.049 "nvme_io": false, 00:11:53.049 "nvme_io_md": 
false, 00:11:53.049 "write_zeroes": true, 00:11:53.049 "zcopy": false, 00:11:53.049 "get_zone_info": false, 00:11:53.049 "zone_management": false, 00:11:53.049 "zone_append": false, 00:11:53.049 "compare": false, 00:11:53.049 "compare_and_write": false, 00:11:53.049 "abort": false, 00:11:53.049 "seek_hole": true, 00:11:53.049 "seek_data": true, 00:11:53.049 "copy": false, 00:11:53.049 "nvme_iov_md": false 00:11:53.049 }, 00:11:53.049 "driver_specific": { 00:11:53.049 "lvol": { 00:11:53.049 "lvol_store_uuid": "7e841a96-eac0-4e31-8e63-618efc1749e2", 00:11:53.049 "base_bdev": "aio_bdev", 00:11:53.049 "thin_provision": false, 00:11:53.049 "num_allocated_clusters": 38, 00:11:53.049 "snapshot": false, 00:11:53.049 "clone": false, 00:11:53.049 "esnap_clone": false 00:11:53.049 } 00:11:53.049 } 00:11:53.049 } 00:11:53.049 ] 00:11:53.049 10:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:53.049 10:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 00:11:53.049 10:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:53.306 10:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:53.306 10:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 00:11:53.306 10:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:53.563 10:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:53.564 10:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:53.856 [2024-07-15 10:28:42.128360] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:53.856 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 00:11:53.856 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:11:53.856 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 00:11:53.856 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:53.856 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.856 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:53.856 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.856 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:11:53.856 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.856 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:53.856 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:53.856 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 00:11:54.135 request: 00:11:54.135 { 00:11:54.135 "uuid": "7e841a96-eac0-4e31-8e63-618efc1749e2", 00:11:54.135 "method": "bdev_lvol_get_lvstores", 00:11:54.135 "req_id": 1 00:11:54.135 } 00:11:54.135 Got JSON-RPC error response 00:11:54.135 response: 00:11:54.135 { 00:11:54.135 "code": -19, 00:11:54.135 "message": "No such device" 00:11:54.135 } 00:11:54.135 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:11:54.135 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:54.135 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:54.135 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:54.135 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:54.135 aio_bdev 00:11:54.135 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a 00:11:54.135 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a 00:11:54.135 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:54.135 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:54.135 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:54.135 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:54.135 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:54.393 10:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a -t 2000 00:11:54.650 [ 00:11:54.650 { 00:11:54.650 "name": "c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a", 00:11:54.650 "aliases": [ 00:11:54.650 "lvs/lvol" 00:11:54.650 ], 00:11:54.650 "product_name": "Logical Volume", 00:11:54.650 "block_size": 4096, 00:11:54.650 "num_blocks": 38912, 00:11:54.650 "uuid": "c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a", 00:11:54.650 "assigned_rate_limits": { 00:11:54.650 "rw_ios_per_sec": 0, 00:11:54.650 "rw_mbytes_per_sec": 0, 00:11:54.650 "r_mbytes_per_sec": 0, 00:11:54.650 "w_mbytes_per_sec": 0 00:11:54.650 }, 00:11:54.650 "claimed": false, 00:11:54.650 "zoned": false, 00:11:54.650 "supported_io_types": { 
00:11:54.650 "read": true, 00:11:54.650 "write": true, 00:11:54.650 "unmap": true, 00:11:54.650 "flush": false, 00:11:54.650 "reset": true, 00:11:54.650 "nvme_admin": false, 00:11:54.650 "nvme_io": false, 00:11:54.650 "nvme_io_md": false, 00:11:54.650 "write_zeroes": true, 00:11:54.650 "zcopy": false, 00:11:54.650 "get_zone_info": false, 00:11:54.650 "zone_management": false, 00:11:54.650 "zone_append": false, 00:11:54.650 "compare": false, 00:11:54.650 "compare_and_write": false, 00:11:54.650 "abort": false, 00:11:54.650 "seek_hole": true, 00:11:54.650 "seek_data": true, 00:11:54.650 "copy": false, 00:11:54.650 "nvme_iov_md": false 00:11:54.650 }, 00:11:54.650 "driver_specific": { 00:11:54.650 "lvol": { 00:11:54.650 "lvol_store_uuid": "7e841a96-eac0-4e31-8e63-618efc1749e2", 00:11:54.650 "base_bdev": "aio_bdev", 00:11:54.650 "thin_provision": false, 00:11:54.650 "num_allocated_clusters": 38, 00:11:54.650 "snapshot": false, 00:11:54.650 "clone": false, 00:11:54.650 "esnap_clone": false 00:11:54.650 } 00:11:54.650 } 00:11:54.650 } 00:11:54.650 ] 00:11:54.650 10:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:54.650 10:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 00:11:54.651 10:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:54.907 10:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:54.907 10:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 00:11:54.907 10:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:55.164 10:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:55.164 10:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c1c80cf5-9c9f-46d4-a91a-3cff6d2b1b4a 00:11:55.421 10:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e841a96-eac0-4e31-8e63-618efc1749e2 00:11:55.679 10:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:55.936 00:11:55.936 real 0m19.038s 00:11:55.936 user 0m48.240s 00:11:55.936 sys 0m4.643s 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:55.936 ************************************ 00:11:55.936 END TEST lvs_grow_dirty 00:11:55.936 ************************************ 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
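Both the clean and dirty variants come down to the same core check, condensed below: after the backing AIO bdev grows, bdev_lvol_grow_lvstore extends the lvstore from 49 to 99 data clusters while the existing 150M lvol keeps its 38 allocated clusters (hence 61 free). The UUID is the dirty-path one from this run.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_lvol_grow_lvstore -u 7e841a96-eac0-4e31-8e63-618efc1749e2   # issued while bdevperf I/O is in flight
$rpc bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 \
  | jq -r '.[0].total_data_clusters'     # 49 -> 99
$rpc bdev_lvol_get_lvstores -u 7e841a96-eac0-4e31-8e63-618efc1749e2 \
  | jq -r '.[0].free_clusters'           # 99 total - 38 allocated = 61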
00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:55.936 nvmf_trace.0 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:55.936 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:55.936 rmmod nvme_tcp 00:11:55.936 rmmod nvme_fabrics 00:11:55.936 rmmod nvme_keyring 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1166894 ']' 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1166894 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1166894 ']' 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1166894 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1166894 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1166894' 00:11:56.194 killing process with pid 1166894 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1166894 00:11:56.194 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1166894 00:11:56.452 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:56.452 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:56.452 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:56.452 
10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:56.452 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:56.452 10:28:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.452 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.452 10:28:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.359 10:28:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:58.359 00:11:58.359 real 0m41.818s 00:11:58.359 user 1m10.648s 00:11:58.359 sys 0m8.458s 00:11:58.359 10:28:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:58.359 10:28:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:58.359 ************************************ 00:11:58.359 END TEST nvmf_lvs_grow 00:11:58.359 ************************************ 00:11:58.359 10:28:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:58.359 10:28:46 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:58.359 10:28:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:58.359 10:28:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.359 10:28:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:58.359 ************************************ 00:11:58.359 START TEST nvmf_bdev_io_wait 00:11:58.359 ************************************ 00:11:58.359 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:58.617 * Looking for test storage... 
00:11:58.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.617 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.617 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:58.617 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.617 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.617 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.617 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.617 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.617 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.617 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.617 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.617 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.617 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:11:58.618 10:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:01.151 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.151 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:01.151 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:01.151 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:01.151 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:01.151 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:01.151 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:01.151 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:01.151 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:01.151 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:01.151 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:01.151 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:01.151 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:01.152 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:01.152 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:01.152 Found net devices under 0000:09:00.0: cvl_0_0 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:01.152 Found net devices under 0000:09:00.1: cvl_0_1 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:01.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:01.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:12:01.152 00:12:01.152 --- 10.0.0.2 ping statistics --- 00:12:01.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.152 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:01.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:12:01.152 00:12:01.152 --- 10.0.0.1 ping statistics --- 00:12:01.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.152 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1169421 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1169421 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1169421 ']' 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.152 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:01.152 [2024-07-15 10:28:49.344089] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:01.152 [2024-07-15 10:28:49.344173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.152 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.152 [2024-07-15 10:28:49.406409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.152 [2024-07-15 10:28:49.513036] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.152 [2024-07-15 10:28:49.513103] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.152 [2024-07-15 10:28:49.513116] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.152 [2024-07-15 10:28:49.513127] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.152 [2024-07-15 10:28:49.513150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.152 [2024-07-15 10:28:49.513255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.152 [2024-07-15 10:28:49.513314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.152 [2024-07-15 10:28:49.513389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.153 [2024-07-15 10:28:49.513392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:01.153 [2024-07-15 10:28:49.629456] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
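The rpc_cmd calls traced above and just below map onto a standalone bring-up of the same target, roughly as sketched here. This is illustrative only: scripts/rpc.py and its default /var/tmp/spdk.sock socket are assumed rather than visible in this trace, paths are relative to an SPDK checkout, and the deliberately tiny bdev_io pool (-p 5 -c 1) is presumably what forces bdevperf into the spdk_bdev_queue_io_wait path this test exercises.

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
./scripts/rpc.py bdev_set_options -p 5 -c 1                # bdev_io_pool_size=5, bdev_io_cache_size=1
./scripts/rpc.py framework_start_init                      # finish startup after --wait-for-rpc
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # flags copied from the trace above
# The Malloc0 bdev, the cnode1 subsystem, its namespace and the 10.0.0.2:4420 listener that
# follow in the trace are added the same way with bdev_malloc_create, nvmf_create_subsystem,
# nvmf_subsystem_add_ns and nvmf_subsystem_add_listener.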
00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:01.153 Malloc0 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:01.153 [2024-07-15 10:28:49.692042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1169451 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1169453 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:01.153 { 00:12:01.153 "params": { 00:12:01.153 "name": "Nvme$subsystem", 00:12:01.153 "trtype": "$TEST_TRANSPORT", 00:12:01.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:01.153 "adrfam": "ipv4", 00:12:01.153 "trsvcid": "$NVMF_PORT", 00:12:01.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:01.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:01.153 "hdgst": ${hdgst:-false}, 00:12:01.153 "ddgst": ${ddgst:-false} 00:12:01.153 }, 00:12:01.153 "method": "bdev_nvme_attach_controller" 00:12:01.153 } 00:12:01.153 EOF 00:12:01.153 )") 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:01.153 10:28:49 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1169455 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:01.153 { 00:12:01.153 "params": { 00:12:01.153 "name": "Nvme$subsystem", 00:12:01.153 "trtype": "$TEST_TRANSPORT", 00:12:01.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:01.153 "adrfam": "ipv4", 00:12:01.153 "trsvcid": "$NVMF_PORT", 00:12:01.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:01.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:01.153 "hdgst": ${hdgst:-false}, 00:12:01.153 "ddgst": ${ddgst:-false} 00:12:01.153 }, 00:12:01.153 "method": "bdev_nvme_attach_controller" 00:12:01.153 } 00:12:01.153 EOF 00:12:01.153 )") 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1169458 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:01.153 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:01.411 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:01.411 { 00:12:01.411 "params": { 00:12:01.411 "name": "Nvme$subsystem", 00:12:01.411 "trtype": "$TEST_TRANSPORT", 00:12:01.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:01.411 "adrfam": "ipv4", 00:12:01.411 "trsvcid": "$NVMF_PORT", 00:12:01.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:01.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:01.411 "hdgst": ${hdgst:-false}, 00:12:01.411 "ddgst": ${ddgst:-false} 00:12:01.411 }, 00:12:01.411 "method": "bdev_nvme_attach_controller" 00:12:01.411 } 00:12:01.411 EOF 00:12:01.411 )") 00:12:01.411 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:01.411 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:01.411 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:01.411 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:01.411 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:01.411 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:01.411 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 
-- # config+=("$(cat <<-EOF 00:12:01.411 { 00:12:01.411 "params": { 00:12:01.411 "name": "Nvme$subsystem", 00:12:01.411 "trtype": "$TEST_TRANSPORT", 00:12:01.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:01.411 "adrfam": "ipv4", 00:12:01.411 "trsvcid": "$NVMF_PORT", 00:12:01.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:01.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:01.412 "hdgst": ${hdgst:-false}, 00:12:01.412 "ddgst": ${ddgst:-false} 00:12:01.412 }, 00:12:01.412 "method": "bdev_nvme_attach_controller" 00:12:01.412 } 00:12:01.412 EOF 00:12:01.412 )") 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1169451 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:01.412 "params": { 00:12:01.412 "name": "Nvme1", 00:12:01.412 "trtype": "tcp", 00:12:01.412 "traddr": "10.0.0.2", 00:12:01.412 "adrfam": "ipv4", 00:12:01.412 "trsvcid": "4420", 00:12:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:01.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:01.412 "hdgst": false, 00:12:01.412 "ddgst": false 00:12:01.412 }, 00:12:01.412 "method": "bdev_nvme_attach_controller" 00:12:01.412 }' 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:01.412 "params": { 00:12:01.412 "name": "Nvme1", 00:12:01.412 "trtype": "tcp", 00:12:01.412 "traddr": "10.0.0.2", 00:12:01.412 "adrfam": "ipv4", 00:12:01.412 "trsvcid": "4420", 00:12:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:01.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:01.412 "hdgst": false, 00:12:01.412 "ddgst": false 00:12:01.412 }, 00:12:01.412 "method": "bdev_nvme_attach_controller" 00:12:01.412 }' 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:01.412 "params": { 00:12:01.412 "name": "Nvme1", 00:12:01.412 "trtype": "tcp", 00:12:01.412 "traddr": "10.0.0.2", 00:12:01.412 "adrfam": "ipv4", 00:12:01.412 "trsvcid": "4420", 00:12:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:01.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:01.412 "hdgst": false, 00:12:01.412 "ddgst": false 00:12:01.412 }, 00:12:01.412 "method": "bdev_nvme_attach_controller" 00:12:01.412 }' 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:01.412 10:28:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:01.412 "params": { 00:12:01.412 "name": "Nvme1", 00:12:01.412 "trtype": "tcp", 00:12:01.412 "traddr": "10.0.0.2", 00:12:01.412 "adrfam": "ipv4", 00:12:01.412 "trsvcid": "4420", 00:12:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:01.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:01.412 "hdgst": false, 00:12:01.412 "ddgst": false 00:12:01.412 }, 00:12:01.412 "method": 
"bdev_nvme_attach_controller" 00:12:01.412 }' 00:12:01.412 [2024-07-15 10:28:49.739242] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:01.412 [2024-07-15 10:28:49.739243] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:01.412 [2024-07-15 10:28:49.739244] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:01.412 [2024-07-15 10:28:49.739332] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 10:28:49.739333] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 10:28:49.739333] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:01.412 --proc-type=auto ] 00:12:01.412 --proc-type=auto ] 00:12:01.412 [2024-07-15 10:28:49.739343] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:01.412 [2024-07-15 10:28:49.739398] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:01.412 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.412 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.412 [2024-07-15 10:28:49.907616] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.670 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.670 [2024-07-15 10:28:50.008778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:01.670 [2024-07-15 10:28:50.008866] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.670 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.670 [2024-07-15 10:28:50.107935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:01.670 [2024-07-15 10:28:50.110092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.670 [2024-07-15 10:28:50.212292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:01.670 [2024-07-15 10:28:50.219453] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.928 [2024-07-15 10:28:50.321395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:01.928 Running I/O for 1 seconds... 00:12:02.186 Running I/O for 1 seconds... 00:12:02.186 Running I/O for 1 seconds... 00:12:02.186 Running I/O for 1 seconds... 
00:12:03.121 00:12:03.121 Latency(us) 00:12:03.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.121 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:03.121 Nvme1n1 : 1.02 7400.66 28.91 0.00 0.00 17140.96 7912.87 26796.94 00:12:03.121 =================================================================================================================== 00:12:03.121 Total : 7400.66 28.91 0.00 0.00 17140.96 7912.87 26796.94 00:12:03.121 00:12:03.121 Latency(us) 00:12:03.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.121 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:03.121 Nvme1n1 : 1.01 8877.32 34.68 0.00 0.00 14352.07 8446.86 26991.12 00:12:03.121 =================================================================================================================== 00:12:03.121 Total : 8877.32 34.68 0.00 0.00 14352.07 8446.86 26991.12 00:12:03.121 00:12:03.121 Latency(us) 00:12:03.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.121 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:03.121 Nvme1n1 : 1.01 6756.47 26.39 0.00 0.00 18872.20 7136.14 40777.96 00:12:03.121 =================================================================================================================== 00:12:03.121 Total : 6756.47 26.39 0.00 0.00 18872.20 7136.14 40777.96 00:12:03.121 00:12:03.121 Latency(us) 00:12:03.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.121 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:03.121 Nvme1n1 : 1.00 201342.16 786.49 0.00 0.00 632.92 268.52 807.06 00:12:03.121 =================================================================================================================== 00:12:03.121 Total : 201342.16 786.49 0.00 0.00 632.92 268.52 807.06 00:12:03.687 10:28:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1169453 00:12:03.687 10:28:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1169455 00:12:03.687 10:28:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1169458 00:12:03.687 10:28:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.687 10:28:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:03.688 rmmod nvme_tcp 00:12:03.688 rmmod nvme_fabrics 00:12:03.688 rmmod nvme_keyring 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1169421 ']' 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1169421 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1169421 ']' 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1169421 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1169421 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1169421' 00:12:03.688 killing process with pid 1169421 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1169421 00:12:03.688 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1169421 00:12:03.947 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:03.947 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:03.947 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:03.947 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:03.947 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:03.947 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.947 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.947 10:28:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.850 10:28:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:05.850 00:12:05.850 real 0m7.471s 00:12:05.850 user 0m17.422s 00:12:05.850 sys 0m3.532s 00:12:05.850 10:28:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.850 10:28:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:05.850 ************************************ 00:12:05.850 END TEST nvmf_bdev_io_wait 00:12:05.850 ************************************ 00:12:05.850 10:28:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:05.850 10:28:54 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:05.850 10:28:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:05.850 10:28:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.850 10:28:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:06.109 ************************************ 00:12:06.109 START TEST nvmf_queue_depth 00:12:06.109 ************************************ 
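The queue_depth test starting here repeats the same nvmf_tcp_init plumbing that the previous test set up and tore back down. Condensed into a sketch, the network side of that setup is roughly the following; the interface names cvl_0_0/cvl_0_1 and the namespace name are the ones detected in the trace, and this only restates the steps the trace below performs one by one.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side e810 port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port toward the initiator
ping -c 1 10.0.0.2                                               # initiator-to-target sanity check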
00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:06.109 * Looking for test storage... 00:12:06.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:06.109 10:28:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:06.110 10:28:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:06.110 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:06.110 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.110 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:06.110 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:06.110 10:28:54 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:06.110 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.110 10:28:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.110 10:28:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.110 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:06.110 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:06.110 10:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:06.110 10:28:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.013 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.013 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:08.013 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.271 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:08.272 
10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:08.272 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:08.272 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:08.272 Found net devices under 0000:09:00.0: cvl_0_0 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:08.272 Found net devices under 0000:09:00.1: cvl_0_1 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:08.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:08.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:12:08.272 00:12:08.272 --- 10.0.0.2 ping statistics --- 00:12:08.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.272 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:08.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:12:08.272 00:12:08.272 --- 10.0.0.1 ping statistics --- 00:12:08.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.272 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1171676 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1171676 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1171676 ']' 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.272 10:28:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.272 [2024-07-15 10:28:56.783015] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:08.272 [2024-07-15 10:28:56.783086] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.272 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.530 [2024-07-15 10:28:56.846425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.530 [2024-07-15 10:28:56.948242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.530 [2024-07-15 10:28:56.948287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.530 [2024-07-15 10:28:56.948302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.530 [2024-07-15 10:28:56.948313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.530 [2024-07-15 10:28:56.948339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.530 [2024-07-15 10:28:56.948363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.530 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.530 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:08.530 10:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.530 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:08.530 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.787 [2024-07-15 10:28:57.086088] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.787 Malloc0 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.787 
10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.787 [2024-07-15 10:28:57.153603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.787 10:28:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1171811 00:12:08.788 10:28:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:08.788 10:28:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:08.788 10:28:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1171811 /var/tmp/bdevperf.sock 00:12:08.788 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1171811 ']' 00:12:08.788 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:08.788 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.788 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:08.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:08.788 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.788 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.788 [2024-07-15 10:28:57.196443] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
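Stripped of the xtrace prefixes, queue_depth.sh provisions the target and launches the initiator roughly as follows (rpc_cmd in the trace is effectively the test wrapper around scripts/rpc.py; absolute workspace paths shortened):

# Target: nvmf_tgt runs inside the namespace created above, pinned to core 1 (-m 0x2).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# (the script then waits for /var/tmp/spdk.sock before issuing RPCs)

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator: bdevperf starts idle (-z) and waits for RPCs on its own socket;
# queue depth 1024, 4 KiB I/O, verify workload, 10 seconds.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

The trace that follows attaches bdevperf to the listener (bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1) and then calls bdevperf.py perform_tests, which is what actually kicks off the 10-second run whose results are printed below.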
00:12:08.788 [2024-07-15 10:28:57.196517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1171811 ] 00:12:08.788 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.788 [2024-07-15 10:28:57.254030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.045 [2024-07-15 10:28:57.359993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.045 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:09.045 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:09.045 10:28:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:09.045 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.045 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:09.303 NVMe0n1 00:12:09.303 10:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.303 10:28:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:09.303 Running I/O for 10 seconds... 00:12:21.494 00:12:21.494 Latency(us) 00:12:21.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.494 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:21.494 Verification LBA range: start 0x0 length 0x4000 00:12:21.494 NVMe0n1 : 10.10 8496.40 33.19 0.00 0.00 119987.47 26796.94 70681.79 00:12:21.494 =================================================================================================================== 00:12:21.494 Total : 8496.40 33.19 0.00 0.00 119987.47 26796.94 70681.79 00:12:21.494 0 00:12:21.494 10:29:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1171811 00:12:21.494 10:29:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1171811 ']' 00:12:21.494 10:29:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1171811 00:12:21.494 10:29:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:21.494 10:29:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.494 10:29:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1171811 00:12:21.494 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:21.494 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:21.494 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1171811' 00:12:21.494 killing process with pid 1171811 00:12:21.494 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1171811 00:12:21.494 Received shutdown signal, test time was about 10.000000 seconds 00:12:21.494 00:12:21.494 Latency(us) 00:12:21.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.494 
=================================================================================================================== 00:12:21.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1171811 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:21.495 rmmod nvme_tcp 00:12:21.495 rmmod nvme_fabrics 00:12:21.495 rmmod nvme_keyring 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1171676 ']' 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1171676 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1171676 ']' 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1171676 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1171676 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1171676' 00:12:21.495 killing process with pid 1171676 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1171676 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1171676 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.495 10:29:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.433 10:29:10 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:22.433 00:12:22.433 real 0m16.248s 00:12:22.433 user 0m21.739s 00:12:22.433 sys 0m3.650s 00:12:22.433 10:29:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:22.433 10:29:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.433 ************************************ 00:12:22.433 END TEST nvmf_queue_depth 00:12:22.433 ************************************ 00:12:22.433 10:29:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:22.433 10:29:10 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:22.433 10:29:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:22.433 10:29:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.433 10:29:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:22.433 ************************************ 00:12:22.433 START TEST nvmf_target_multipath 00:12:22.433 ************************************ 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:22.433 * Looking for test storage... 00:12:22.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:22.433 10:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:22.434 10:29:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.341 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:24.342 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:24.342 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:24.342 Found net devices under 0000:09:00.0: cvl_0_0 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:24.342 Found net devices under 0000:09:00.1: cvl_0_1 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:24.342 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:24.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:12:24.602 00:12:24.602 --- 10.0.0.2 ping statistics --- 00:12:24.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.602 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:24.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:24.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:12:24.602 00:12:24.602 --- 10.0.0.1 ping statistics --- 00:12:24.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.602 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:24.602 only one NIC for nvmf test 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:24.602 10:29:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:24.602 rmmod nvme_tcp 00:12:24.602 rmmod nvme_fabrics 00:12:24.602 rmmod nvme_keyring 00:12:24.602 10:29:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:24.602 10:29:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:24.602 10:29:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:24.602 10:29:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:24.602 10:29:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:24.602 10:29:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:24.602 10:29:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:24.602 10:29:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.602 10:29:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:24.602 10:29:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.602 10:29:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.602 10:29:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.507 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:12:26.507 10:29:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:26.507 10:29:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:26.507 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:26.507 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:26.507 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:26.507 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:26.507 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.507 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:26.767 00:12:26.767 real 0m4.360s 00:12:26.767 user 0m0.869s 00:12:26.767 sys 0m1.487s 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:26.767 10:29:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:26.767 ************************************ 00:12:26.767 END TEST nvmf_target_multipath 00:12:26.767 ************************************ 00:12:26.767 10:29:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:26.767 10:29:15 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:26.767 10:29:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:26.767 10:29:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.767 10:29:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:26.767 ************************************ 00:12:26.767 START TEST nvmf_zcopy 00:12:26.767 ************************************ 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:26.767 * Looking for test storage... 
00:12:26.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:26.767 10:29:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:29.370 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:29.370 
10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:29.370 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:29.370 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:29.371 Found net devices under 0000:09:00.0: cvl_0_0 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:29.371 Found net devices under 0000:09:00.1: cvl_0_1 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:29.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:12:29.371 00:12:29.371 --- 10.0.0.2 ping statistics --- 00:12:29.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.371 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:29.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:12:29.371 00:12:29.371 --- 10.0.0.1 ping statistics --- 00:12:29.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.371 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1176897 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1176897 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1176897 ']' 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:29.371 [2024-07-15 10:29:17.537387] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:29.371 [2024-07-15 10:29:17.537476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.371 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.371 [2024-07-15 10:29:17.600577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.371 [2024-07-15 10:29:17.710949] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.371 [2024-07-15 10:29:17.711002] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:29.371 [2024-07-15 10:29:17.711031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.371 [2024-07-15 10:29:17.711043] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.371 [2024-07-15 10:29:17.711053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.371 [2024-07-15 10:29:17.711085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:29.371 [2024-07-15 10:29:17.852364] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:29.371 [2024-07-15 10:29:17.868532] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:29.371 malloc0 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.371 
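Everything up to this point is reproducible outside the harness. nvmf/common.sh puts the target-side port (cvl_0_0, 10.0.0.2) into its own network namespace and leaves the initiator-side port (cvl_0_1, 10.0.0.1) in the root namespace, so initiator and target traffic actually cross the link instead of staying inside one stack; the rpc_cmd lines above are ordinary SPDK JSON-RPCs that can be issued with scripts/rpc.py. A minimal sketch of the same bring-up, using the interface names, addresses and flags taken from the trace (run as root from an SPDK checkout; this is an illustration, not the harness code itself):

ip netns add cvl_0_0_ns_spdk                       # target port gets a private namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
ping -c 1 10.0.0.2                                             # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done            # harness does this via waitforlisten
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy  # TCP transport with zero-copy enabled
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0         # 32 MiB ramdisk, 4 KiB blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # the next rpc_cmd in the trace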
10:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:29.371 { 00:12:29.371 "params": { 00:12:29.371 "name": "Nvme$subsystem", 00:12:29.371 "trtype": "$TEST_TRANSPORT", 00:12:29.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:29.371 "adrfam": "ipv4", 00:12:29.371 "trsvcid": "$NVMF_PORT", 00:12:29.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:29.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:29.371 "hdgst": ${hdgst:-false}, 00:12:29.371 "ddgst": ${ddgst:-false} 00:12:29.371 }, 00:12:29.371 "method": "bdev_nvme_attach_controller" 00:12:29.371 } 00:12:29.371 EOF 00:12:29.371 )") 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:29.371 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:29.372 10:29:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:29.372 "params": { 00:12:29.372 "name": "Nvme1", 00:12:29.372 "trtype": "tcp", 00:12:29.372 "traddr": "10.0.0.2", 00:12:29.372 "adrfam": "ipv4", 00:12:29.372 "trsvcid": "4420", 00:12:29.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:29.372 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:29.372 "hdgst": false, 00:12:29.372 "ddgst": false 00:12:29.372 }, 00:12:29.372 "method": "bdev_nvme_attach_controller" 00:12:29.372 }' 00:12:29.629 [2024-07-15 10:29:17.951709] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:29.629 [2024-07-15 10:29:17.951813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177018 ] 00:12:29.629 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.629 [2024-07-15 10:29:18.015354] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.629 [2024-07-15 10:29:18.124115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.208 Running I/O for 10 seconds... 
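On the initiator side nothing goes through the kernel NVMe host: bdevperf reads a JSON config from an anonymous pipe (--json /dev/fd/62 above), and that config, produced by gen_nvmf_target_json, contains a single bdev_nvme_attach_controller call aimed at the listener created earlier, so the SPDK NVMe/TCP initiator connects to 10.0.0.2:4420 and bdevperf drives a 10 second verify workload at queue depth 128 with 8 KiB I/O. A standalone equivalent that writes the config to a file first is sketched below; the outer "subsystems"/"bdev" wrapper is the standard SPDK application JSON layout and is assumed here, only the attach-controller entry appears verbatim in the trace:

cat > /tmp/nvmf_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

./build/examples/bdevperf --json /tmp/nvmf_bdev.json -t 10 -q 128 -w verify -o 8192

The latency table that follows is the result of this first run.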
00:12:40.184 
00:12:40.184 Latency(us)
00:12:40.184 Device Information            : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:40.184 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:12:40.184 Verification LBA range: start 0x0 length 0x1000
00:12:40.184 Nvme1n1                       :      10.02    5555.82      43.40       0.00     0.00   22978.92    3203.98   33204.91
00:12:40.184 ===================================================================================================================
00:12:40.184 Total                         :               5555.82      43.40       0.00     0.00   22978.92    3203.98   33204.91
00:12:40.443 10:29:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1178213 00:12:40.443 10:29:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:40.443 10:29:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:40.443 10:29:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:40.443 10:29:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:40.443 10:29:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:40.443 10:29:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:40.443 10:29:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:40.443 10:29:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:40.443 { 00:12:40.443 "params": { 00:12:40.443 "name": "Nvme$subsystem", 00:12:40.443 "trtype": "$TEST_TRANSPORT", 00:12:40.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:40.443 "adrfam": "ipv4", 00:12:40.443 "trsvcid": "$NVMF_PORT", 00:12:40.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:40.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:40.443 "hdgst": ${hdgst:-false}, 00:12:40.443 "ddgst": ${ddgst:-false} 00:12:40.443 }, 00:12:40.443 "method": "bdev_nvme_attach_controller" 00:12:40.443 } 00:12:40.443 EOF 00:12:40.443 )") 00:12:40.443 10:29:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:40.443 [2024-07-15 10:29:28.781037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.781093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 10:29:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:12:40.443 10:29:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:40.443 10:29:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:40.443 "params": { 00:12:40.443 "name": "Nvme1", 00:12:40.443 "trtype": "tcp", 00:12:40.443 "traddr": "10.0.0.2", 00:12:40.443 "adrfam": "ipv4", 00:12:40.443 "trsvcid": "4420", 00:12:40.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:40.443 "hdgst": false, 00:12:40.443 "ddgst": false 00:12:40.443 }, 00:12:40.443 "method": "bdev_nvme_attach_controller" 00:12:40.443 }' 00:12:40.443 [2024-07-15 10:29:28.789003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.789027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.797022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.797044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.805041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.805062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.813062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.813083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.815709] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:40.443 [2024-07-15 10:29:28.815778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178213 ] 00:12:40.443 [2024-07-15 10:29:28.821099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.821119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.829118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.829140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.837143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.837178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.443 [2024-07-15 10:29:28.845171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.845192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.853191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.853213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.861213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.861233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.869233] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.869255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.877253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.877275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.878575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.443 [2024-07-15 10:29:28.885302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.885330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.893341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.893375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.901316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.901337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.909338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.909364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.917362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.917383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.925385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.925407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.933408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.933428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.941445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.941473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.949485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.949521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.957473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.957494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.965494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.965515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.443 [2024-07-15 10:29:28.973515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.443 [2024-07-15 10:29:28.973536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.444 [2024-07-15 10:29:28.981535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:40.444 [2024-07-15 10:29:28.981555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.444 [2024-07-15 10:29:28.989568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.444 [2024-07-15 10:29:28.989591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:28.997586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:28.997615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:28.997890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.702 [2024-07-15 10:29:29.005601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.005622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.013647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.013678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.021672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.021704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.029695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.029729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.037720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.037755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.045744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.045778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.053760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.053815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.061795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.061838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.069776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.069819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.077848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.077879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.085872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.085907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.093875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:40.702 [2024-07-15 10:29:29.093901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.101881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.101901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.109912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.109933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.117919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.117939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.125949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.125973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.133968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.133991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.141991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.142014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.150015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.150039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.158039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.158062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.166058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.166080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.174082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.174102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.182103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.182137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.190140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.190159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.198165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.198186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.206185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.206206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.214201] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.214220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.222222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.222242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.230245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.230265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.238270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.238290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.702 [2024-07-15 10:29:29.246298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.702 [2024-07-15 10:29:29.246320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.254345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.254370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.262341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.262363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.270362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.270383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.278384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.278404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.286408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.286429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.294437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.294464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.302459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.302483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.310475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.310496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 Running I/O for 5 seconds... 
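From here to the end of the excerpt the log is dominated by the same two messages repeated with fresh timestamps: spdk_nvmf_subsystem_add_ns_ext rejecting NSID 1 and nvmf_rpc_ns_paused reporting the failed RPC. They begin as soon as perfpid is captured and continue for the whole 5 second randrw run (50/50 read/write mix, -M 50), which is consistent with zcopy.sh re-issuing nvmf_subsystem_add_ns for a namespace that is still attached while bdevperf I/O is in flight: each attempt pauses and resumes the subsystem, and the point of this phase is that the zero-copy data path survives that churn, so the repeated errors are expected rather than failures. A sketch of the pattern the trace implies (illustrative, not the script's exact text):

./build/examples/bdevperf --json /tmp/nvmf_bdev.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!

# Re-adding NSID 1 always fails ("Requested NSID 1 already in use"), but every
# attempt pauses and resumes the subsystem while I/O is running, which is the
# pause/resume-under-load behaviour this phase exercises.
while kill -0 "$perfpid" 2>/dev/null; do
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"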
00:12:40.961 [2024-07-15 10:29:29.318498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.318518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.331691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.331720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.345491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.345519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.355987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.356015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.366811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.366838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.380016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.380044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.389846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.389873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.400588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.400615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.410701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.410728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.421080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.421107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.431860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.431887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.443699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.443727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.452744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.452771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.463958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.463985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.474596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 
[2024-07-15 10:29:29.474622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.484881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.484908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.495715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.495749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.961 [2024-07-15 10:29:29.506728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.961 [2024-07-15 10:29:29.506756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.220 [2024-07-15 10:29:29.519135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.220 [2024-07-15 10:29:29.519162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.220 [2024-07-15 10:29:29.528341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.220 [2024-07-15 10:29:29.528368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.220 [2024-07-15 10:29:29.539553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.220 [2024-07-15 10:29:29.539580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.220 [2024-07-15 10:29:29.549756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.220 [2024-07-15 10:29:29.549783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.220 [2024-07-15 10:29:29.560378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.220 [2024-07-15 10:29:29.560405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.220 [2024-07-15 10:29:29.573526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.220 [2024-07-15 10:29:29.573553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.220 [2024-07-15 10:29:29.583539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.220 [2024-07-15 10:29:29.583566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.220 [2024-07-15 10:29:29.593976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.220 [2024-07-15 10:29:29.594003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.220 [2024-07-15 10:29:29.604417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.220 [2024-07-15 10:29:29.604445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.220 [2024-07-15 10:29:29.614497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.220 [2024-07-15 10:29:29.614526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.220 [2024-07-15 10:29:29.624951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.220 [2024-07-15 10:29:29.624978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.220 [2024-07-15 10:29:29.635971] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.220 [2024-07-15 10:29:29.635997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.220 [2024-07-15 10:29:29.646461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.220 [2024-07-15 10:29:29.646488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.221 [2024-07-15 10:29:29.658554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.221 [2024-07-15 10:29:29.658581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.221 [2024-07-15 10:29:29.667935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.221 [2024-07-15 10:29:29.667962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.221 [2024-07-15 10:29:29.678724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.221 [2024-07-15 10:29:29.678751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.221 [2024-07-15 10:29:29.689172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.221 [2024-07-15 10:29:29.689199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.221 [2024-07-15 10:29:29.699491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.221 [2024-07-15 10:29:29.699524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.221 [2024-07-15 10:29:29.709791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.221 [2024-07-15 10:29:29.709825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.221 [2024-07-15 10:29:29.720367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.221 [2024-07-15 10:29:29.720393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.221 [2024-07-15 10:29:29.734044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.221 [2024-07-15 10:29:29.734072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.221 [2024-07-15 10:29:29.744283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.221 [2024-07-15 10:29:29.744309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.221 [2024-07-15 10:29:29.754535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.221 [2024-07-15 10:29:29.754562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.221 [2024-07-15 10:29:29.765120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.221 [2024-07-15 10:29:29.765148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.775752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.775780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.786429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.786456] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.798742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.798768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.808904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.808931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.819481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.819508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.830108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.830135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.840651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.840678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.851249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.851276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.861733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.861760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.873912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.873939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.883915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.883943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.894548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.894575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.905344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.905377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.916067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.916094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.928634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.928661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.940061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.940088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.949127] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.949154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.960530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.960557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.972923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.972951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.982916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.982943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:29.993493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:29.993520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:30.004204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:30.004244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:30.015388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:30.015421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.480 [2024-07-15 10:29:30.026095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.480 [2024-07-15 10:29:30.026126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.037293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.037324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.048137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.048165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.061031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.061059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.072868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.072895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.081831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.081858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.093401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.093428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.105899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.105926] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.115949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.115976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.126450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.126478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.138774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.138809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.148640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.148667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.159393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.159419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.169944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.169971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.182266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.182292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.192608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.192635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.203237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.203264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.215859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.215886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.226020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.226047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.236400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.236427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.246453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.246481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.256547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.256574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.266533] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.266560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.277125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.277152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.740 [2024-07-15 10:29:30.287565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.740 [2024-07-15 10:29:30.287593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.298879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.298907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.311428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.311455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.321778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.321813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.332684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.332721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.345149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.345177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.355169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.355196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.365632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.365660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.375752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.375778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.386262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.386289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.398991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.399018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.411136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.411163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.420648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.420675] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.431693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.431720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.442511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.442538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.452723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.452750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.462823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.462850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.473082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.473109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.483441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.483467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.493751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.493778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.504167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.504193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.514457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.514484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.525070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.525097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.535499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.535526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.999 [2024-07-15 10:29:30.546237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.999 [2024-07-15 10:29:30.546265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.557034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.557061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.569302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.569329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.579491] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.579518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.589579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.589606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.599779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.599815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.610591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.610618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.622949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.622979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.632836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.632863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.645030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.645057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.656789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.656827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.666181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.666208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.676413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.676440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.687342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.687369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.697575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.697602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.707696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.707723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.717997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.256 [2024-07-15 10:29:30.718030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.256 [2024-07-15 10:29:30.728517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.257 [2024-07-15 10:29:30.728544] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.257 [2024-07-15 10:29:30.741372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.257 [2024-07-15 10:29:30.741398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.257 [2024-07-15 10:29:30.751635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.257 [2024-07-15 10:29:30.751662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.257 [2024-07-15 10:29:30.762262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.257 [2024-07-15 10:29:30.762289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.257 [2024-07-15 10:29:30.774858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.257 [2024-07-15 10:29:30.774885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.257 [2024-07-15 10:29:30.784551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.257 [2024-07-15 10:29:30.784577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.257 [2024-07-15 10:29:30.795205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.257 [2024-07-15 10:29:30.795232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.807666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.807694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.819129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.819157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.828080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.828107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.839492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.839519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.850037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.850065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.860690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.860717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.871284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.871311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.882150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.882177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.894360] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.894386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.904132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.904159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.915228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.915254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.927679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.927712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.937699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.937726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.948564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.948591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.959082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.959108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.969862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.969889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.980331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.980358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:30.990639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:30.990666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:31.001222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:31.001249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:31.011785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:31.011820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:31.024136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:31.024163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:31.034070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:31.034097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:31.044648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:31.044675] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.515 [2024-07-15 10:29:31.055154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.515 [2024-07-15 10:29:31.055181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.067753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.067781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.077512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.077539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.088368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.088395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.098862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.098890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.109508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.109535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.120430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.120459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.131109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.131143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.143866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.143893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.154197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.154224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.164435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.164463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.174979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.175007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.185236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.185264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.195171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.195199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.205602] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.205629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.772 [2024-07-15 10:29:31.218374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.772 [2024-07-15 10:29:31.218402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.773 [2024-07-15 10:29:31.228501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.773 [2024-07-15 10:29:31.228529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.773 [2024-07-15 10:29:31.238847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.773 [2024-07-15 10:29:31.238875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.773 [2024-07-15 10:29:31.248841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.773 [2024-07-15 10:29:31.248868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.773 [2024-07-15 10:29:31.258847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.773 [2024-07-15 10:29:31.258874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.773 [2024-07-15 10:29:31.269472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.773 [2024-07-15 10:29:31.269499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.773 [2024-07-15 10:29:31.282088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.773 [2024-07-15 10:29:31.282115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.773 [2024-07-15 10:29:31.293795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.773 [2024-07-15 10:29:31.293832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.773 [2024-07-15 10:29:31.303237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.773 [2024-07-15 10:29:31.303264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.773 [2024-07-15 10:29:31.314570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.773 [2024-07-15 10:29:31.314597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.325903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.325930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.338826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.338860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.349149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.349176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.359905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.359932] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.372212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.372239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.383934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.383960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.393608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.393635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.403999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.404026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.414480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.414508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.424687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.424715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.435185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.435212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.445570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.445596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.457966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.457993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.467857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.467884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.478247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.478274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.489257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.489284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.499762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.499789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.510393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.510421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.522473] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.522500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.532442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.532469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.542759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.542793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.553135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.553162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.563166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.563192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.030 [2024-07-15 10:29:31.573353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.030 [2024-07-15 10:29:31.573380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.288 [2024-07-15 10:29:31.583970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.288 [2024-07-15 10:29:31.583997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.288 [2024-07-15 10:29:31.596501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.288 [2024-07-15 10:29:31.596528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.288 [2024-07-15 10:29:31.606460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.288 [2024-07-15 10:29:31.606486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.288 [2024-07-15 10:29:31.616919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.616945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.627602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.627632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.638280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.638309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.648849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.648887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.661354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.661382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.671500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.671527] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.681849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.681875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.692300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.692328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.703046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.703073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.713323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.713350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.723308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.723335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.733852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.733879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.746459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.746486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.758965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.758992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.768206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.768232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.778840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.778867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.789559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.789585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.799462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.799490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.809917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.809944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.820290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.820317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.289 [2024-07-15 10:29:31.830841] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.289 [2024-07-15 10:29:31.830868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.841586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.841614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.854060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.854088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.864138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.864166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.874168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.874194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.884682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.884708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.895220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.895247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.905635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.905662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.915709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.915735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.926228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.926255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.936986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.937013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.947391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.947419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.958113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.958141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.968541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.968568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.979203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.979230] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:31.989629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:31.989656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:32.000129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:32.000156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:32.010747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:32.010774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:32.024058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:32.024085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:32.036140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:32.036167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:32.045753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:32.045780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:32.055872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:32.055898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:32.066342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:32.066369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:32.076816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:32.076843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.547 [2024-07-15 10:29:32.087336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.547 [2024-07-15 10:29:32.087363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.097988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.098017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.110402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.110429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.119714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.119742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.132123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.132151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.142160] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.142186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.152539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.152566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.163042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.163069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.173611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.173638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.185894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.185921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.195763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.195789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.206289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.206316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.216467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.216494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.226927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.226954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.237407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.237435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.247989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.248017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.260339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.260367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.270348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.270376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.280621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.280648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.291188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.291215] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.304648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.304676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.314778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.314814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.325237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.325265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.335597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.335625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.806 [2024-07-15 10:29:32.346284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.806 [2024-07-15 10:29:32.346316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.064 [2024-07-15 10:29:32.359344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.064 [2024-07-15 10:29:32.359372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.369175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.369203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.380134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.380161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.393144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.393171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.404924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.404951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.414268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.414296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.425061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.425088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.437834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.437862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.448368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.448394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.458551] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.458578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.469174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.469202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.480122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.480161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.490628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.490656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.503861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.503888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.514198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.514225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.524646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.524673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.535325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.535351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.545530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.545557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.555968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.556002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.566252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.566279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.576792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.576828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.587248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.587274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.597480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.597507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.065 [2024-07-15 10:29:32.607808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.065 [2024-07-15 10:29:32.607834] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.618586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.618614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.629196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.629223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.639872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.639899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.650368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.650396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.662923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.662959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.673011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.673038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.683301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.683328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.693531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.693558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.704007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.704034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.714506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.714533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.724683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.724710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.734896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.734924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.744978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.745005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.755496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.755530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.765798] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.765833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.776263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.776290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.787365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.787392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.800219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.800246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.810461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.810487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.821223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.821250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.833505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.833532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.843370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.843397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.853518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.853545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.323 [2024-07-15 10:29:32.864225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.323 [2024-07-15 10:29:32.864251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:32.875108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:32.875136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:32.885515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:32.885542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:32.896296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:32.896324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:32.908388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:32.908415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:32.918001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:32.918027] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:32.928266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:32.928294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:32.938736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:32.938763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:32.949289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:32.949316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:32.959970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:32.960003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:32.972257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:32.972298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:32.982265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:32.982291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:32.992723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:32.992750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:33.003102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:33.003129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:33.013569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:33.013597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:33.024139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:33.024166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:33.034535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:33.034561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:33.044775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:33.044809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:33.055267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:33.055294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:33.067435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:33.067461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:33.077445] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:33.077472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:33.087949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:33.087976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:33.098944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:33.098971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:33.109207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:33.109234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:33.119140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:33.119167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.581 [2024-07-15 10:29:33.129746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.581 [2024-07-15 10:29:33.129773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.143518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.143546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.153455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.153483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.163864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.163898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.174534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.174561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.185020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.185047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.197284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.197311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.207336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.207363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.217840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.217867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.228170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.228197] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.238344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.238370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.248447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.248473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.258985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.259012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.271236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.271263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.280092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.280119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.292825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.292851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.304696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.304723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.313593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.313620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.325582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.325608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.336238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.336265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.347246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.347273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.360082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.360107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.370557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.370584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.838 [2024-07-15 10:29:33.380989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.838 [2024-07-15 10:29:33.381017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.392022] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.392050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.402585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.402613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.415765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.415793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.427535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.427562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.436507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.436534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.447974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.448001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.458519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.458546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.478776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.478815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.489245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.489272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.499933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.499961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.510586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.510613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.521166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.521193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.532179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.532206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.544633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.544660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.095 [2024-07-15 10:29:33.556108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.095 [2024-07-15 10:29:33.556136] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.096 [2024-07-15 10:29:33.565206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.096 [2024-07-15 10:29:33.565233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.096 [2024-07-15 10:29:33.576811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.096 [2024-07-15 10:29:33.576837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.096 [2024-07-15 10:29:33.589454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.096 [2024-07-15 10:29:33.589481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.096 [2024-07-15 10:29:33.599339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.096 [2024-07-15 10:29:33.599366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.096 [2024-07-15 10:29:33.609928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.096 [2024-07-15 10:29:33.609955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.096 [2024-07-15 10:29:33.623093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.096 [2024-07-15 10:29:33.623120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.096 [2024-07-15 10:29:33.633115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.096 [2024-07-15 10:29:33.633142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.096 [2024-07-15 10:29:33.643844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.096 [2024-07-15 10:29:33.643872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.656463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.656491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.666231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.666258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.676876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.676903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.687516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.687543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.699732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.699759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.709247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.709274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.720090] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.720117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.730725] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.730753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.741228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.741255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.751933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.751960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.762353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.762380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.773044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.773071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.783655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.783681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.794332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.794358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.806663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.806690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.816743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.816770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.827628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.827654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.838341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.838367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.849245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.849272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.861970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.861997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.872133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.872159] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.882862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.882888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.354 [2024-07-15 10:29:33.895301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.354 [2024-07-15 10:29:33.895329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:33.905737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:33.905765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:33.916283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:33.916311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:33.926428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:33.926455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:33.936846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:33.936873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:33.947422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:33.947449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:33.957977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:33.958004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:33.968430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:33.968456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:33.978975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:33.979003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:33.989266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:33.989299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:33.999622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:33.999648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.010088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.010115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.020573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.020600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.030948] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.030976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.041463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.041489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.052161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.052188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.062473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.062500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.073122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.073149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.083527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.083554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.094086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.094113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.104829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.104856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.115090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.115117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.125839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.125866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.136505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.136532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.149025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.149052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.613 [2024-07-15 10:29:34.161957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.613 [2024-07-15 10:29:34.161985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.171131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.171159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.183465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.183491] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.193581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.193614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.203901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.203927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.214340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.214366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.224943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.224969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.237433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.237461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.249830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.249857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.259541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.259568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.270103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.270131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.282615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.282642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.291454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.291481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.302612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.302639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.312900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.312927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.323497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.871 [2024-07-15 10:29:34.323523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.871 [2024-07-15 10:29:34.333265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.872 [2024-07-15 10:29:34.333292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.872 00:12:45.872 Latency(us) 00:12:45.872 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:12:45.872 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:45.872 Nvme1n1 : 5.01 12085.07 94.41 0.00 0.00 10577.71 4587.52 21942.42 00:12:45.872 =================================================================================================================== 00:12:45.872 Total : 12085.07 94.41 0.00 0.00 10577.71 4587.52 21942.42 00:12:45.872 [2024-07-15 10:29:34.337872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.872 [2024-07-15 10:29:34.337895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.872 [2024-07-15 10:29:34.345898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.872 [2024-07-15 10:29:34.345924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.872 [2024-07-15 10:29:34.353907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.872 [2024-07-15 10:29:34.353935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.872 [2024-07-15 10:29:34.361977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.872 [2024-07-15 10:29:34.362019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.872 [2024-07-15 10:29:34.370006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.872 [2024-07-15 10:29:34.370050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.872 [2024-07-15 10:29:34.378018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.872 [2024-07-15 10:29:34.378060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.872 [2024-07-15 10:29:34.386038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.872 [2024-07-15 10:29:34.386079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.872 [2024-07-15 10:29:34.394058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.872 [2024-07-15 10:29:34.394099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.872 [2024-07-15 10:29:34.402089] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.872 [2024-07-15 10:29:34.402131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.872 [2024-07-15 10:29:34.410106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.872 [2024-07-15 10:29:34.410146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.872 [2024-07-15 10:29:34.418138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.872 [2024-07-15 10:29:34.418184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.426167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.426213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.434187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.434232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.442203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.442247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.450223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.450264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.458236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.458276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.466265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.466307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.474282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.474322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.482265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.482289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.490276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.490297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.498296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.498317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.506317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.506346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.514346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.514369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.522422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.522466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.530434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.530477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.538407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.538428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.546426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.546446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.554449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:12:46.130 [2024-07-15 10:29:34.554469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.562470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.562489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.570527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.570561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.578562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.578605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.586570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.130 [2024-07-15 10:29:34.586602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.130 [2024-07-15 10:29:34.594557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.131 [2024-07-15 10:29:34.594576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.131 [2024-07-15 10:29:34.602577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.131 [2024-07-15 10:29:34.602596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1178213) - No such process 00:12:46.131 10:29:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1178213 00:12:46.131 10:29:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.131 10:29:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.131 10:29:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:46.131 10:29:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.131 10:29:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:46.131 10:29:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.131 10:29:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:46.131 delay0 00:12:46.131 10:29:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.131 10:29:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:46.131 10:29:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.131 10:29:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:46.131 10:29:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.131 10:29:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:46.131 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.388 [2024-07-15 10:29:34.722019] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service 
referral 00:12:52.940 Initializing NVMe Controllers 00:12:52.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:52.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:52.940 Initialization complete. Launching workers. 00:12:52.940 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 85 00:12:52.940 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 372, failed to submit 33 00:12:52.940 success 181, unsuccess 191, failed 0 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.940 rmmod nvme_tcp 00:12:52.940 rmmod nvme_fabrics 00:12:52.940 rmmod nvme_keyring 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1176897 ']' 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1176897 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1176897 ']' 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1176897 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1176897 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1176897' 00:12:52.940 killing process with pid 1176897 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1176897 00:12:52.940 10:29:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1176897 00:12:52.940 10:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.940 10:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:52.940 10:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:52.940 10:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.940 10:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.940 10:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.940 10:29:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.940 10:29:41 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.846 10:29:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:54.846 00:12:54.846 real 0m28.043s 00:12:54.846 user 0m40.291s 00:12:54.846 sys 0m8.734s 00:12:54.846 10:29:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:54.846 10:29:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:54.846 ************************************ 00:12:54.846 END TEST nvmf_zcopy 00:12:54.846 ************************************ 00:12:54.846 10:29:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:54.846 10:29:43 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:54.846 10:29:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:54.846 10:29:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.846 10:29:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:54.846 ************************************ 00:12:54.846 START TEST nvmf_nmic 00:12:54.846 ************************************ 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:54.846 * Looking for test storage... 00:12:54.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
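[Editor's note, not part of the harness output: for anyone replaying the zcopy abort stage that ends above, the RPC sequence it drove is condensed below as a sketch. It assumes rpc_cmd in zcopy.sh resolves to scripts/rpc.py against the default /var/tmp/spdk.sock, and it reuses the bdev names, subsystem NQN, and 10.0.0.2:4420 listener that appear verbatim in the log.]
    # Swap the malloc namespace for a delay bdev so queued I/O stays in flight
    # long enough for the abort example to cancel it (latencies in microseconds).
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Drive the namespace over TCP and submit aborts against the queued commands.
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'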
00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.846 10:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:57.376 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.376 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:57.376 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:57.377 Found net devices under 0000:09:00.0: cvl_0_0 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.377 10:29:45 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:57.377 Found net devices under 0000:09:00.1: cvl_0_1 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:57.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:12:57.377 00:12:57.377 --- 10.0.0.2 ping statistics --- 00:12:57.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.377 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:12:57.377 00:12:57.377 --- 10.0.0.1 ping statistics --- 00:12:57.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.377 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1181595 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1181595 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1181595 ']' 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.377 [2024-07-15 10:29:45.558593] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:57.377 [2024-07-15 10:29:45.558662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.377 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.377 [2024-07-15 10:29:45.619271] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.377 [2024-07-15 10:29:45.720717] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.377 [2024-07-15 10:29:45.720777] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.377 [2024-07-15 10:29:45.720790] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.377 [2024-07-15 10:29:45.720806] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.377 [2024-07-15 10:29:45.720826] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.377 [2024-07-15 10:29:45.720914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.377 [2024-07-15 10:29:45.720972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.377 [2024-07-15 10:29:45.721047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.377 [2024-07-15 10:29:45.721049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.377 [2024-07-15 10:29:45.877673] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.377 Malloc0 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.377 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.635 [2024-07-15 10:29:45.931113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:57.635 test case1: single bdev can't be used in multiple subsystems 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.635 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.635 [2024-07-15 10:29:45.954954] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:57.635 [2024-07-15 10:29:45.954985] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:57.635 [2024-07-15 10:29:45.954999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.635 request: 00:12:57.635 { 00:12:57.635 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:57.635 "namespace": { 00:12:57.635 "bdev_name": "Malloc0", 00:12:57.635 "no_auto_visible": false 00:12:57.635 }, 00:12:57.635 "method": "nvmf_subsystem_add_ns", 00:12:57.635 "req_id": 1 00:12:57.635 } 00:12:57.635 Got JSON-RPC error response 00:12:57.635 response: 00:12:57.635 { 00:12:57.635 "code": -32602, 00:12:57.635 "message": "Invalid parameters" 00:12:57.635 } 00:12:57.636 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:57.636 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:57.636 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:57.636 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
echo ' Adding namespace failed - expected result.' 00:12:57.636 Adding namespace failed - expected result. 00:12:57.636 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:57.636 test case2: host connect to nvmf target in multiple paths 00:12:57.636 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:57.636 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.636 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.636 [2024-07-15 10:29:45.963050] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:57.636 10:29:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.636 10:29:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.201 10:29:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:58.766 10:29:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.766 10:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:58.766 10:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.766 10:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:58.766 10:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:00.661 10:29:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:00.661 10:29:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:00.661 10:29:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.661 10:29:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:00.661 10:29:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.661 10:29:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:00.661 10:29:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:00.661 [global] 00:13:00.661 thread=1 00:13:00.661 invalidate=1 00:13:00.661 rw=write 00:13:00.661 time_based=1 00:13:00.661 runtime=1 00:13:00.661 ioengine=libaio 00:13:00.661 direct=1 00:13:00.661 bs=4096 00:13:00.661 iodepth=1 00:13:00.661 norandommap=0 00:13:00.661 numjobs=1 00:13:00.661 00:13:00.661 verify_dump=1 00:13:00.661 verify_backlog=512 00:13:00.661 verify_state_save=0 00:13:00.661 do_verify=1 00:13:00.661 verify=crc32c-intel 00:13:00.661 [job0] 00:13:00.661 filename=/dev/nvme0n1 00:13:00.661 Could not set queue depth (nvme0n1) 00:13:00.918 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:00.918 fio-3.35 00:13:00.918 Starting 1 thread 00:13:02.331 00:13:02.331 job0: (groupid=0, jobs=1): err= 0: pid=1182117: Mon Jul 15 10:29:50 2024 00:13:02.331 read: IOPS=2164, BW=8659KiB/s 
(8867kB/s)(8668KiB/1001msec) 00:13:02.331 slat (nsec): min=5348, max=54657, avg=9929.90, stdev=6100.85 00:13:02.331 clat (usec): min=169, max=650, avg=208.56, stdev=24.65 00:13:02.331 lat (usec): min=174, max=655, avg=218.49, stdev=27.83 00:13:02.331 clat percentiles (usec): 00:13:02.331 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 192], 00:13:02.331 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 206], 00:13:02.331 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 260], 00:13:02.331 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 322], 99.95th=[ 326], 00:13:02.331 | 99.99th=[ 652] 00:13:02.331 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:02.331 slat (usec): min=7, max=29250, avg=25.11, stdev=577.87 00:13:02.331 clat (usec): min=124, max=1761, avg=174.35, stdev=52.57 00:13:02.331 lat (usec): min=132, max=29452, avg=199.46, stdev=581.02 00:13:02.331 clat percentiles (usec): 00:13:02.331 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:13:02.331 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 163], 00:13:02.331 | 70.00th=[ 188], 80.00th=[ 219], 90.00th=[ 239], 95.00th=[ 247], 00:13:02.331 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 343], 99.95th=[ 603], 00:13:02.331 | 99.99th=[ 1762] 00:13:02.332 bw ( KiB/s): min=11232, max=11232, per=100.00%, avg=11232.00, stdev= 0.00, samples=1 00:13:02.332 iops : min= 2808, max= 2808, avg=2808.00, stdev= 0.00, samples=1 00:13:02.332 lat (usec) : 250=94.99%, 500=4.95%, 750=0.04% 00:13:02.332 lat (msec) : 2=0.02% 00:13:02.332 cpu : usr=3.20%, sys=6.80%, ctx=4731, majf=0, minf=2 00:13:02.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:02.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.332 issued rwts: total=2167,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:02.332 00:13:02.332 Run status group 0 (all jobs): 00:13:02.332 READ: bw=8659KiB/s (8867kB/s), 8659KiB/s-8659KiB/s (8867kB/s-8867kB/s), io=8668KiB (8876kB), run=1001-1001msec 00:13:02.332 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:13:02.332 00:13:02.332 Disk stats (read/write): 00:13:02.332 nvme0n1: ios=2074/2241, merge=0/0, ticks=1403/358, in_queue=1761, util=98.70% 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # 
nvmftestfini 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:02.332 rmmod nvme_tcp 00:13:02.332 rmmod nvme_fabrics 00:13:02.332 rmmod nvme_keyring 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1181595 ']' 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1181595 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1181595 ']' 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1181595 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1181595 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1181595' 00:13:02.332 killing process with pid 1181595 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1181595 00:13:02.332 10:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1181595 00:13:02.617 10:29:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:02.617 10:29:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:02.617 10:29:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:02.617 10:29:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:02.617 10:29:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:02.617 10:29:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.617 10:29:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.617 10:29:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.152 10:29:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:05.152 00:13:05.152 real 0m9.954s 00:13:05.152 user 0m22.242s 00:13:05.152 sys 0m2.520s 00:13:05.152 10:29:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:05.152 10:29:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:05.152 ************************************ 00:13:05.152 END TEST nvmf_nmic 00:13:05.152 ************************************ 00:13:05.152 10:29:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:05.152 10:29:53 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:05.152 10:29:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:05.152 10:29:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.152 10:29:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:05.152 ************************************ 00:13:05.152 START TEST nvmf_fio_target 00:13:05.152 ************************************ 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:05.152 * Looking for test storage... 00:13:05.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:05.152 10:29:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.053 10:29:55 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:07.053 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:07.053 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.053 10:29:55 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:07.053 Found net devices under 0000:09:00.0: cvl_0_0 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:07.053 Found net devices under 0000:09:00.1: cvl_0_1 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:07.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:13:07.053 00:13:07.053 --- 10.0.0.2 ping statistics --- 00:13:07.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.053 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:13:07.053 00:13:07.053 --- 10.0.0.1 ping statistics --- 00:13:07.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.053 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:07.053 10:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.311 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1184310 00:13:07.311 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:07.311 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1184310 00:13:07.311 10:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1184310 ']' 00:13:07.311 10:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.311 10:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:07.311 10:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
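(Editor's note) The namespace plumbing that nvmf/common.sh performs in the trace above, before the ping checks and the target launch, can be summarized in the following shell sketch. It is a minimal reconstruction assembled only from commands visible in this log; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, the cvl_0_0_ns_spdk namespace, and the relative nvmf_tgt path are the values of this particular run (shortened here), not fixed SPDK defaults:

# Move the target-side port into its own network namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # let NVMe/TCP traffic in

# Verify reachability in both directions, then start the target inside the namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The two one-packet pings in the log correspond to the reachability check above; everything after that point runs the target application inside cvl_0_0_ns_spdk and talks to it over /var/tmp/spdk.sock.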
00:13:07.311 10:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:07.311 10:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.311 [2024-07-15 10:29:55.653930] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:07.311 [2024-07-15 10:29:55.654018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.311 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.311 [2024-07-15 10:29:55.716400] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.311 [2024-07-15 10:29:55.823475] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.311 [2024-07-15 10:29:55.823521] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.311 [2024-07-15 10:29:55.823544] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.311 [2024-07-15 10:29:55.823555] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.311 [2024-07-15 10:29:55.823565] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.311 [2024-07-15 10:29:55.823652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.311 [2024-07-15 10:29:55.823717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.311 [2024-07-15 10:29:55.823784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.311 [2024-07-15 10:29:55.823787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.569 10:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.569 10:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:07.569 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:07.569 10:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:07.569 10:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.569 10:29:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.569 10:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:07.826 [2024-07-15 10:29:56.258563] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.826 10:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:08.083 10:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:08.083 10:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:08.341 10:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:08.341 10:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:08.599 10:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
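(Editor's note) The RPC calls already traced above and the ones that follow amount to one provisioning sequence for the NVMe/TCP target used by fio.sh: create the transport, create the backing malloc bdevs plus RAID0/concat volumes, expose them as namespaces of one subsystem with a TCP listener, then connect from the initiator. A consolidated sketch using only commands and arguments that appear in this log (the serial number, NQN, and 10.0.0.2:4420 listener are this run's values; $RPC stands in for the full workspace path to scripts/rpc.py) might look like:

RPC=/path/to/spdk/scripts/rpc.py   # this run uses the copy under the Jenkins workspace

# Transport and backing bdevs (repeated bdev_malloc_create calls yield Malloc0..Malloc6).
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512
$RPC bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$RPC bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

# Subsystem, namespaces, listener.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, then wait until all four namespaces show up as block devices.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
     --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
     --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial expects 4 here

This mirrors the waitforserial loop in the log, which polls lsblk until the namespace count matches before handing /dev/nvme0n1../dev/nvme0n4 to the fio-wrapper jobs.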
00:13:08.599 10:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:09.165 10:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:09.165 10:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:09.165 10:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:09.423 10:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:09.423 10:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:09.988 10:29:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:09.988 10:29:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:10.246 10:29:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:10.246 10:29:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:10.503 10:29:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:10.760 10:29:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:10.760 10:29:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:10.760 10:29:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:10.760 10:29:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.017 10:29:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.274 [2024-07-15 10:29:59.772913] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.274 10:29:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:11.532 10:30:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:11.789 10:30:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.719 10:30:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:12.719 10:30:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:12.719 10:30:00 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.719 10:30:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:12.719 10:30:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:12.719 10:30:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:14.613 10:30:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:14.613 10:30:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:14.613 10:30:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.613 10:30:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:14.613 10:30:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.613 10:30:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:14.613 10:30:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:14.613 [global] 00:13:14.613 thread=1 00:13:14.613 invalidate=1 00:13:14.613 rw=write 00:13:14.613 time_based=1 00:13:14.613 runtime=1 00:13:14.613 ioengine=libaio 00:13:14.613 direct=1 00:13:14.613 bs=4096 00:13:14.613 iodepth=1 00:13:14.613 norandommap=0 00:13:14.613 numjobs=1 00:13:14.613 00:13:14.613 verify_dump=1 00:13:14.613 verify_backlog=512 00:13:14.613 verify_state_save=0 00:13:14.613 do_verify=1 00:13:14.613 verify=crc32c-intel 00:13:14.613 [job0] 00:13:14.613 filename=/dev/nvme0n1 00:13:14.613 [job1] 00:13:14.613 filename=/dev/nvme0n2 00:13:14.613 [job2] 00:13:14.613 filename=/dev/nvme0n3 00:13:14.613 [job3] 00:13:14.613 filename=/dev/nvme0n4 00:13:14.613 Could not set queue depth (nvme0n1) 00:13:14.613 Could not set queue depth (nvme0n2) 00:13:14.613 Could not set queue depth (nvme0n3) 00:13:14.613 Could not set queue depth (nvme0n4) 00:13:14.870 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:14.870 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:14.870 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:14.870 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:14.870 fio-3.35 00:13:14.870 Starting 4 threads 00:13:16.240 00:13:16.240 job0: (groupid=0, jobs=1): err= 0: pid=1185496: Mon Jul 15 10:30:04 2024 00:13:16.240 read: IOPS=1489, BW=5959KiB/s (6102kB/s)(6168KiB/1035msec) 00:13:16.240 slat (nsec): min=5815, max=40973, avg=12064.92, stdev=5537.70 00:13:16.240 clat (usec): min=189, max=41901, avg=390.45, stdev=2323.45 00:13:16.240 lat (usec): min=206, max=41923, avg=402.52, stdev=2323.69 00:13:16.240 clat percentiles (usec): 00:13:16.240 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:13:16.240 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:13:16.240 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 318], 00:13:16.240 | 99.00th=[ 433], 99.50th=[ 449], 99.90th=[41157], 99.95th=[41681], 00:13:16.240 | 99.99th=[41681] 00:13:16.240 write: IOPS=1978, BW=7915KiB/s (8105kB/s)(8192KiB/1035msec); 0 zone resets 00:13:16.240 slat (nsec): min=7363, max=60688, avg=14768.12, stdev=6570.21 00:13:16.240 clat 
(usec): min=135, max=908, avg=180.84, stdev=27.96 00:13:16.240 lat (usec): min=144, max=917, avg=195.61, stdev=30.63 00:13:16.240 clat percentiles (usec): 00:13:16.240 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 161], 00:13:16.240 | 30.00th=[ 167], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:13:16.240 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 221], 00:13:16.240 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 277], 99.95th=[ 437], 00:13:16.240 | 99.99th=[ 906] 00:13:16.240 bw ( KiB/s): min= 8192, max= 8192, per=41.40%, avg=8192.00, stdev= 0.00, samples=2 00:13:16.240 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:13:16.240 lat (usec) : 250=77.08%, 500=22.76%, 1000=0.03% 00:13:16.240 lat (msec) : 50=0.14% 00:13:16.240 cpu : usr=4.26%, sys=5.80%, ctx=3590, majf=0, minf=1 00:13:16.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.240 issued rwts: total=1542,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:16.240 job1: (groupid=0, jobs=1): err= 0: pid=1185497: Mon Jul 15 10:30:04 2024 00:13:16.240 read: IOPS=20, BW=83.3KiB/s (85.3kB/s)(84.0KiB/1008msec) 00:13:16.240 slat (nsec): min=8851, max=13747, avg=12907.19, stdev=981.19 00:13:16.240 clat (usec): min=40889, max=42052, avg=41187.83, stdev=420.23 00:13:16.240 lat (usec): min=40902, max=42065, avg=41200.74, stdev=420.13 00:13:16.240 clat percentiles (usec): 00:13:16.240 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:16.240 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:16.240 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:13:16.240 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:16.240 | 99.99th=[42206] 00:13:16.240 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:13:16.240 slat (usec): min=8, max=12587, avg=40.99, stdev=555.61 00:13:16.240 clat (usec): min=144, max=441, avg=232.15, stdev=35.15 00:13:16.240 lat (usec): min=154, max=12870, avg=273.14, stdev=558.83 00:13:16.240 clat percentiles (usec): 00:13:16.240 | 1.00th=[ 153], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 204], 00:13:16.240 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 241], 00:13:16.240 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:13:16.240 | 99.00th=[ 338], 99.50th=[ 383], 99.90th=[ 441], 99.95th=[ 441], 00:13:16.240 | 99.99th=[ 441] 00:13:16.240 bw ( KiB/s): min= 4096, max= 4096, per=20.70%, avg=4096.00, stdev= 0.00, samples=1 00:13:16.240 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:16.240 lat (usec) : 250=70.17%, 500=25.89% 00:13:16.240 lat (msec) : 50=3.94% 00:13:16.240 cpu : usr=0.70%, sys=0.89%, ctx=537, majf=0, minf=2 00:13:16.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.240 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:16.240 job2: (groupid=0, jobs=1): err= 0: pid=1185499: Mon Jul 15 10:30:04 2024 00:13:16.240 read: IOPS=21, BW=87.0KiB/s 
(89.0kB/s)(88.0KiB/1012msec) 00:13:16.240 slat (nsec): min=7890, max=34711, avg=14300.86, stdev=4730.05 00:13:16.240 clat (usec): min=40894, max=42009, avg=41116.17, stdev=356.01 00:13:16.240 lat (usec): min=40902, max=42023, avg=41130.47, stdev=355.58 00:13:16.240 clat percentiles (usec): 00:13:16.240 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:16.240 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:16.240 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:13:16.240 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:16.240 | 99.99th=[42206] 00:13:16.240 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:13:16.240 slat (nsec): min=8582, max=64488, avg=15027.98, stdev=7467.83 00:13:16.240 clat (usec): min=151, max=263, avg=189.03, stdev=21.17 00:13:16.240 lat (usec): min=163, max=296, avg=204.06, stdev=24.88 00:13:16.240 clat percentiles (usec): 00:13:16.240 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:13:16.240 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:13:16.240 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 231], 00:13:16.240 | 99.00th=[ 255], 99.50th=[ 258], 99.90th=[ 265], 99.95th=[ 265], 00:13:16.240 | 99.99th=[ 265] 00:13:16.240 bw ( KiB/s): min= 4096, max= 4096, per=20.70%, avg=4096.00, stdev= 0.00, samples=1 00:13:16.240 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:16.240 lat (usec) : 250=93.82%, 500=2.06% 00:13:16.240 lat (msec) : 50=4.12% 00:13:16.240 cpu : usr=0.69%, sys=0.69%, ctx=535, majf=0, minf=1 00:13:16.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.241 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:16.241 job3: (groupid=0, jobs=1): err= 0: pid=1185500: Mon Jul 15 10:30:04 2024 00:13:16.241 read: IOPS=1511, BW=6045KiB/s (6190kB/s)(6160KiB/1019msec) 00:13:16.241 slat (nsec): min=5912, max=66581, avg=11931.65, stdev=5869.08 00:13:16.241 clat (usec): min=188, max=41934, avg=356.04, stdev=2085.54 00:13:16.241 lat (usec): min=194, max=41948, avg=367.98, stdev=2085.65 00:13:16.241 clat percentiles (usec): 00:13:16.241 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:13:16.241 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 253], 00:13:16.241 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 285], 00:13:16.241 | 99.00th=[ 379], 99.50th=[ 408], 99.90th=[41157], 99.95th=[41681], 00:13:16.241 | 99.99th=[41681] 00:13:16.241 write: IOPS=2009, BW=8039KiB/s (8232kB/s)(8192KiB/1019msec); 0 zone resets 00:13:16.241 slat (nsec): min=6637, max=57059, avg=15260.19, stdev=7030.92 00:13:16.241 clat (usec): min=133, max=1080, avg=198.30, stdev=53.70 00:13:16.241 lat (usec): min=141, max=1097, avg=213.56, stdev=54.84 00:13:16.241 clat percentiles (usec): 00:13:16.241 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 165], 00:13:16.241 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 198], 00:13:16.241 | 70.00th=[ 206], 80.00th=[ 221], 90.00th=[ 243], 95.00th=[ 262], 00:13:16.241 | 99.00th=[ 363], 99.50th=[ 424], 99.90th=[ 947], 99.95th=[ 1057], 00:13:16.241 | 99.99th=[ 1074] 00:13:16.241 bw ( KiB/s): min= 8192, max= 8192, 
per=41.40%, avg=8192.00, stdev= 0.00, samples=2 00:13:16.241 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:13:16.241 lat (usec) : 250=75.25%, 500=24.41%, 750=0.06%, 1000=0.11% 00:13:16.241 lat (msec) : 2=0.06%, 50=0.11% 00:13:16.241 cpu : usr=3.83%, sys=6.09%, ctx=3588, majf=0, minf=1 00:13:16.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.241 issued rwts: total=1540,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:16.241 00:13:16.241 Run status group 0 (all jobs): 00:13:16.241 READ: bw=11.8MiB/s (12.4MB/s), 83.3KiB/s-6045KiB/s (85.3kB/s-6190kB/s), io=12.2MiB (12.8MB), run=1008-1035msec 00:13:16.241 WRITE: bw=19.3MiB/s (20.3MB/s), 2024KiB/s-8039KiB/s (2072kB/s-8232kB/s), io=20.0MiB (21.0MB), run=1008-1035msec 00:13:16.241 00:13:16.241 Disk stats (read/write): 00:13:16.241 nvme0n1: ios=1586/2048, merge=0/0, ticks=399/359, in_queue=758, util=85.77% 00:13:16.241 nvme0n2: ios=39/512, merge=0/0, ticks=1626/100, in_queue=1726, util=97.23% 00:13:16.241 nvme0n3: ios=74/512, merge=0/0, ticks=1682/95, in_queue=1777, util=97.15% 00:13:16.241 nvme0n4: ios=1536/1973, merge=0/0, ticks=359/371, in_queue=730, util=89.47% 00:13:16.241 10:30:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:16.241 [global] 00:13:16.241 thread=1 00:13:16.241 invalidate=1 00:13:16.241 rw=randwrite 00:13:16.241 time_based=1 00:13:16.241 runtime=1 00:13:16.241 ioengine=libaio 00:13:16.241 direct=1 00:13:16.241 bs=4096 00:13:16.241 iodepth=1 00:13:16.241 norandommap=0 00:13:16.241 numjobs=1 00:13:16.241 00:13:16.241 verify_dump=1 00:13:16.241 verify_backlog=512 00:13:16.241 verify_state_save=0 00:13:16.241 do_verify=1 00:13:16.241 verify=crc32c-intel 00:13:16.241 [job0] 00:13:16.241 filename=/dev/nvme0n1 00:13:16.241 [job1] 00:13:16.241 filename=/dev/nvme0n2 00:13:16.241 [job2] 00:13:16.241 filename=/dev/nvme0n3 00:13:16.241 [job3] 00:13:16.241 filename=/dev/nvme0n4 00:13:16.241 Could not set queue depth (nvme0n1) 00:13:16.241 Could not set queue depth (nvme0n2) 00:13:16.241 Could not set queue depth (nvme0n3) 00:13:16.241 Could not set queue depth (nvme0n4) 00:13:16.241 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:16.241 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:16.241 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:16.241 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:16.241 fio-3.35 00:13:16.241 Starting 4 threads 00:13:17.613 00:13:17.613 job0: (groupid=0, jobs=1): err= 0: pid=1185727: Mon Jul 15 10:30:05 2024 00:13:17.613 read: IOPS=1092, BW=4372KiB/s (4477kB/s)(4376KiB/1001msec) 00:13:17.613 slat (nsec): min=5109, max=67022, avg=14982.53, stdev=9024.36 00:13:17.613 clat (usec): min=181, max=42009, avg=567.06, stdev=3529.79 00:13:17.613 lat (usec): min=188, max=42028, avg=582.04, stdev=3530.44 00:13:17.613 clat percentiles (usec): 00:13:17.613 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 221], 00:13:17.613 | 30.00th=[ 229], 
40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 253], 00:13:17.613 | 70.00th=[ 273], 80.00th=[ 322], 90.00th=[ 371], 95.00th=[ 396], 00:13:17.613 | 99.00th=[ 429], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:13:17.613 | 99.99th=[42206] 00:13:17.614 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:17.614 slat (nsec): min=6096, max=68240, avg=14365.72, stdev=9407.78 00:13:17.614 clat (usec): min=122, max=522, avg=215.11, stdev=81.68 00:13:17.614 lat (usec): min=130, max=550, avg=229.47, stdev=87.17 00:13:17.614 clat percentiles (usec): 00:13:17.614 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 149], 00:13:17.614 | 30.00th=[ 157], 40.00th=[ 169], 50.00th=[ 182], 60.00th=[ 202], 00:13:17.614 | 70.00th=[ 223], 80.00th=[ 297], 90.00th=[ 371], 95.00th=[ 388], 00:13:17.614 | 99.00th=[ 412], 99.50th=[ 441], 99.90th=[ 482], 99.95th=[ 523], 00:13:17.614 | 99.99th=[ 523] 00:13:17.614 bw ( KiB/s): min= 4096, max= 4096, per=22.31%, avg=4096.00, stdev= 0.00, samples=1 00:13:17.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:17.614 lat (usec) : 250=67.83%, 500=31.79%, 750=0.04% 00:13:17.614 lat (msec) : 2=0.04%, 50=0.30% 00:13:17.614 cpu : usr=2.50%, sys=3.60%, ctx=2630, majf=0, minf=1 00:13:17.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:17.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.614 issued rwts: total=1094,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:17.614 job1: (groupid=0, jobs=1): err= 0: pid=1185728: Mon Jul 15 10:30:05 2024 00:13:17.614 read: IOPS=23, BW=95.6KiB/s (97.9kB/s)(96.0KiB/1004msec) 00:13:17.614 slat (nsec): min=14936, max=34240, avg=19810.83, stdev=5510.67 00:13:17.614 clat (usec): min=293, max=42057, avg=36151.52, stdev=13837.89 00:13:17.614 lat (usec): min=311, max=42076, avg=36171.33, stdev=13838.34 00:13:17.614 clat percentiles (usec): 00:13:17.614 | 1.00th=[ 293], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[41157], 00:13:17.614 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:17.614 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:13:17.614 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:17.614 | 99.99th=[42206] 00:13:17.614 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:13:17.614 slat (nsec): min=7262, max=53855, avg=16243.02, stdev=8558.85 00:13:17.614 clat (usec): min=157, max=563, avg=243.85, stdev=63.67 00:13:17.614 lat (usec): min=170, max=594, avg=260.10, stdev=63.81 00:13:17.614 clat percentiles (usec): 00:13:17.614 | 1.00th=[ 165], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 192], 00:13:17.614 | 30.00th=[ 202], 40.00th=[ 215], 50.00th=[ 225], 60.00th=[ 241], 00:13:17.614 | 70.00th=[ 265], 80.00th=[ 289], 90.00th=[ 326], 95.00th=[ 388], 00:13:17.614 | 99.00th=[ 437], 99.50th=[ 486], 99.90th=[ 562], 99.95th=[ 562], 00:13:17.614 | 99.99th=[ 562] 00:13:17.614 bw ( KiB/s): min= 4096, max= 4096, per=22.31%, avg=4096.00, stdev= 0.00, samples=1 00:13:17.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:17.614 lat (usec) : 250=60.82%, 500=34.89%, 750=0.37% 00:13:17.614 lat (msec) : 50=3.92% 00:13:17.614 cpu : usr=0.70%, sys=1.00%, ctx=537, majf=0, minf=1 00:13:17.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:13:17.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.614 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:17.614 job2: (groupid=0, jobs=1): err= 0: pid=1185731: Mon Jul 15 10:30:05 2024 00:13:17.614 read: IOPS=1026, BW=4108KiB/s (4206kB/s)(4124KiB/1004msec) 00:13:17.614 slat (nsec): min=5773, max=80079, avg=16353.04, stdev=10231.15 00:13:17.614 clat (usec): min=175, max=42252, avg=644.17, stdev=3622.77 00:13:17.614 lat (usec): min=181, max=42264, avg=660.53, stdev=3623.16 00:13:17.614 clat percentiles (usec): 00:13:17.614 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 202], 00:13:17.614 | 30.00th=[ 221], 40.00th=[ 241], 50.00th=[ 260], 60.00th=[ 359], 00:13:17.614 | 70.00th=[ 433], 80.00th=[ 474], 90.00th=[ 510], 95.00th=[ 562], 00:13:17.614 | 99.00th=[ 652], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:17.614 | 99.99th=[42206] 00:13:17.614 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:13:17.614 slat (nsec): min=5937, max=52975, avg=14976.35, stdev=7014.91 00:13:17.614 clat (usec): min=128, max=445, avg=187.92, stdev=43.95 00:13:17.614 lat (usec): min=137, max=480, avg=202.90, stdev=46.62 00:13:17.614 clat percentiles (usec): 00:13:17.614 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:13:17.614 | 30.00th=[ 159], 40.00th=[ 174], 50.00th=[ 186], 60.00th=[ 194], 00:13:17.614 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 233], 95.00th=[ 255], 00:13:17.614 | 99.00th=[ 371], 99.50th=[ 412], 99.90th=[ 445], 99.95th=[ 445], 00:13:17.614 | 99.99th=[ 445] 00:13:17.614 bw ( KiB/s): min= 4096, max= 8192, per=33.47%, avg=6144.00, stdev=2896.31, samples=2 00:13:17.614 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:13:17.614 lat (usec) : 250=74.87%, 500=20.53%, 750=4.29% 00:13:17.614 lat (msec) : 50=0.31% 00:13:17.614 cpu : usr=2.29%, sys=4.79%, ctx=2567, majf=0, minf=1 00:13:17.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:17.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.614 issued rwts: total=1031,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:17.614 job3: (groupid=0, jobs=1): err= 0: pid=1185732: Mon Jul 15 10:30:05 2024 00:13:17.614 read: IOPS=841, BW=3365KiB/s (3445kB/s)(3368KiB/1001msec) 00:13:17.614 slat (nsec): min=5402, max=68171, avg=13171.19, stdev=9216.80 00:13:17.614 clat (usec): min=188, max=42162, avg=897.13, stdev=5050.27 00:13:17.614 lat (usec): min=195, max=42176, avg=910.30, stdev=5051.25 00:13:17.614 clat percentiles (usec): 00:13:17.614 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 210], 00:13:17.614 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 233], 00:13:17.614 | 70.00th=[ 247], 80.00th=[ 285], 90.00th=[ 461], 95.00th=[ 510], 00:13:17.614 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:17.614 | 99.99th=[42206] 00:13:17.614 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:17.614 slat (nsec): min=6702, max=54434, avg=14909.33, stdev=6446.24 00:13:17.614 clat (usec): min=141, max=458, avg=206.39, stdev=53.49 00:13:17.614 lat (usec): min=149, max=490, avg=221.30, 
stdev=54.24 00:13:17.614 clat percentiles (usec): 00:13:17.614 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:13:17.614 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 194], 60.00th=[ 215], 00:13:17.614 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 258], 95.00th=[ 302], 00:13:17.614 | 99.00th=[ 416], 99.50th=[ 437], 99.90th=[ 457], 99.95th=[ 457], 00:13:17.614 | 99.99th=[ 457] 00:13:17.614 bw ( KiB/s): min= 4096, max= 4096, per=22.31%, avg=4096.00, stdev= 0.00, samples=1 00:13:17.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:17.614 lat (usec) : 250=79.74%, 500=17.42%, 750=2.14% 00:13:17.614 lat (msec) : 50=0.70% 00:13:17.614 cpu : usr=1.00%, sys=3.10%, ctx=1867, majf=0, minf=2 00:13:17.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:17.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.614 issued rwts: total=842,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:17.614 00:13:17.614 Run status group 0 (all jobs): 00:13:17.614 READ: bw=11.6MiB/s (12.2MB/s), 95.6KiB/s-4372KiB/s (97.9kB/s-4477kB/s), io=11.7MiB (12.3MB), run=1001-1004msec 00:13:17.614 WRITE: bw=17.9MiB/s (18.8MB/s), 2040KiB/s-6138KiB/s (2089kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1004msec 00:13:17.614 00:13:17.614 Disk stats (read/write): 00:13:17.614 nvme0n1: ios=878/1024, merge=0/0, ticks=563/236, in_queue=799, util=86.27% 00:13:17.614 nvme0n2: ios=40/512, merge=0/0, ticks=725/120, in_queue=845, util=86.69% 00:13:17.614 nvme0n3: ios=1027/1536, merge=0/0, ticks=481/281, in_queue=762, util=88.80% 00:13:17.614 nvme0n4: ios=569/622, merge=0/0, ticks=1612/145, in_queue=1757, util=97.78% 00:13:17.614 10:30:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:17.614 [global] 00:13:17.614 thread=1 00:13:17.614 invalidate=1 00:13:17.614 rw=write 00:13:17.614 time_based=1 00:13:17.614 runtime=1 00:13:17.614 ioengine=libaio 00:13:17.614 direct=1 00:13:17.614 bs=4096 00:13:17.614 iodepth=128 00:13:17.614 norandommap=0 00:13:17.614 numjobs=1 00:13:17.614 00:13:17.614 verify_dump=1 00:13:17.614 verify_backlog=512 00:13:17.614 verify_state_save=0 00:13:17.614 do_verify=1 00:13:17.614 verify=crc32c-intel 00:13:17.614 [job0] 00:13:17.614 filename=/dev/nvme0n1 00:13:17.614 [job1] 00:13:17.614 filename=/dev/nvme0n2 00:13:17.614 [job2] 00:13:17.614 filename=/dev/nvme0n3 00:13:17.614 [job3] 00:13:17.614 filename=/dev/nvme0n4 00:13:17.614 Could not set queue depth (nvme0n1) 00:13:17.614 Could not set queue depth (nvme0n2) 00:13:17.614 Could not set queue depth (nvme0n3) 00:13:17.614 Could not set queue depth (nvme0n4) 00:13:17.614 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:17.614 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:17.614 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:17.614 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:17.614 fio-3.35 00:13:17.614 Starting 4 threads 00:13:18.995 00:13:18.995 job0: (groupid=0, jobs=1): err= 0: pid=1185963: Mon Jul 15 10:30:07 2024 00:13:18.995 read: IOPS=4580, 
BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:13:18.995 slat (usec): min=3, max=16776, avg=100.88, stdev=659.51 00:13:18.995 clat (usec): min=6459, max=49677, avg=13007.71, stdev=5594.58 00:13:18.995 lat (usec): min=6465, max=49691, avg=13108.59, stdev=5652.68 00:13:18.995 clat percentiles (usec): 00:13:18.995 | 1.00th=[ 7963], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10421], 00:13:18.995 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11600], 60.00th=[11731], 00:13:18.995 | 70.00th=[12649], 80.00th=[13566], 90.00th=[15270], 95.00th=[30016], 00:13:18.995 | 99.00th=[40633], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:18.995 | 99.99th=[49546] 00:13:18.995 write: IOPS=4881, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1006msec); 0 zone resets 00:13:18.995 slat (usec): min=3, max=15186, avg=100.56, stdev=574.67 00:13:18.995 clat (usec): min=5327, max=51107, avg=13797.90, stdev=6239.69 00:13:18.995 lat (usec): min=6331, max=51120, avg=13898.46, stdev=6283.39 00:13:18.995 clat percentiles (usec): 00:13:18.995 | 1.00th=[ 7439], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10814], 00:13:18.995 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11994], 60.00th=[12387], 00:13:18.995 | 70.00th=[12780], 80.00th=[13829], 90.00th=[21365], 95.00th=[27132], 00:13:18.995 | 99.00th=[39060], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:13:18.995 | 99.99th=[51119] 00:13:18.995 bw ( KiB/s): min=17336, max=20936, per=25.80%, avg=19136.00, stdev=2545.58, samples=2 00:13:18.995 iops : min= 4334, max= 5234, avg=4784.00, stdev=636.40, samples=2 00:13:18.995 lat (msec) : 10=9.38%, 20=81.42%, 50=9.19%, 100=0.01% 00:13:18.995 cpu : usr=5.87%, sys=7.86%, ctx=563, majf=0, minf=1 00:13:18.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:18.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:18.995 issued rwts: total=4608,4911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.995 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:18.995 job1: (groupid=0, jobs=1): err= 0: pid=1185964: Mon Jul 15 10:30:07 2024 00:13:18.995 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:13:18.995 slat (usec): min=2, max=12058, avg=108.39, stdev=709.69 00:13:18.995 clat (usec): min=4042, max=37949, avg=13159.45, stdev=4617.16 00:13:18.995 lat (usec): min=4050, max=37956, avg=13267.84, stdev=4655.69 00:13:18.995 clat percentiles (usec): 00:13:18.995 | 1.00th=[ 5342], 5.00th=[ 8291], 10.00th=[ 9372], 20.00th=[10028], 00:13:18.995 | 30.00th=[10552], 40.00th=[11469], 50.00th=[11863], 60.00th=[12518], 00:13:18.995 | 70.00th=[13829], 80.00th=[15926], 90.00th=[18744], 95.00th=[22414], 00:13:18.995 | 99.00th=[31851], 99.50th=[35914], 99.90th=[38011], 99.95th=[38011], 00:13:18.995 | 99.99th=[38011] 00:13:18.995 write: IOPS=5029, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1005msec); 0 zone resets 00:13:18.995 slat (usec): min=3, max=10421, avg=88.60, stdev=442.95 00:13:18.995 clat (usec): min=494, max=49451, avg=13232.37, stdev=7012.78 00:13:18.995 lat (usec): min=509, max=49473, avg=13320.96, stdev=7059.11 00:13:18.995 clat percentiles (usec): 00:13:18.995 | 1.00th=[ 3621], 5.00th=[ 5014], 10.00th=[ 6915], 20.00th=[ 9896], 00:13:18.995 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12256], 00:13:18.995 | 70.00th=[12518], 80.00th=[13304], 90.00th=[22938], 95.00th=[32113], 00:13:18.995 | 99.00th=[35390], 99.50th=[44303], 99.90th=[49546], 99.95th=[49546], 00:13:18.995 | 
99.99th=[49546] 00:13:18.995 bw ( KiB/s): min=17344, max=22080, per=26.58%, avg=19712.00, stdev=3348.86, samples=2 00:13:18.995 iops : min= 4336, max= 5520, avg=4928.00, stdev=837.21, samples=2 00:13:18.995 lat (usec) : 500=0.01%, 750=0.03% 00:13:18.995 lat (msec) : 4=0.78%, 10=19.01%, 20=69.99%, 50=10.18% 00:13:18.995 cpu : usr=4.88%, sys=7.87%, ctx=587, majf=0, minf=1 00:13:18.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:18.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:18.995 issued rwts: total=4608,5055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.995 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:18.995 job2: (groupid=0, jobs=1): err= 0: pid=1185965: Mon Jul 15 10:30:07 2024 00:13:18.995 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:13:18.995 slat (usec): min=2, max=12758, avg=112.96, stdev=781.18 00:13:18.995 clat (usec): min=4919, max=26359, avg=14273.41, stdev=3794.43 00:13:18.995 lat (usec): min=4933, max=26384, avg=14386.37, stdev=3836.63 00:13:18.995 clat percentiles (usec): 00:13:18.995 | 1.00th=[ 6325], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[11469], 00:13:18.995 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13435], 60.00th=[14091], 00:13:18.995 | 70.00th=[15795], 80.00th=[17171], 90.00th=[20317], 95.00th=[22414], 00:13:18.995 | 99.00th=[24773], 99.50th=[25297], 99.90th=[26346], 99.95th=[26346], 00:13:18.995 | 99.99th=[26346] 00:13:18.995 write: IOPS=4797, BW=18.7MiB/s (19.7MB/s)(18.9MiB/1008msec); 0 zone resets 00:13:18.995 slat (usec): min=4, max=11432, avg=92.01, stdev=528.48 00:13:18.995 clat (usec): min=1294, max=26308, avg=12844.59, stdev=2764.10 00:13:18.995 lat (usec): min=1317, max=26319, avg=12936.60, stdev=2805.36 00:13:18.995 clat percentiles (usec): 00:13:18.995 | 1.00th=[ 4621], 5.00th=[ 7111], 10.00th=[ 9241], 20.00th=[11994], 00:13:18.995 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13304], 60.00th=[13566], 00:13:18.995 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14615], 95.00th=[15008], 00:13:18.995 | 99.00th=[22938], 99.50th=[23987], 99.90th=[25560], 99.95th=[25822], 00:13:18.995 | 99.99th=[26346] 00:13:18.995 bw ( KiB/s): min=17200, max=20464, per=25.39%, avg=18832.00, stdev=2308.00, samples=2 00:13:18.995 iops : min= 4300, max= 5116, avg=4708.00, stdev=577.00, samples=2 00:13:18.995 lat (msec) : 2=0.02%, 4=0.28%, 10=8.60%, 20=84.53%, 50=6.58% 00:13:18.995 cpu : usr=5.56%, sys=7.94%, ctx=555, majf=0, minf=1 00:13:18.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:18.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:18.995 issued rwts: total=4608,4836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.995 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:18.995 job3: (groupid=0, jobs=1): err= 0: pid=1185966: Mon Jul 15 10:30:07 2024 00:13:18.995 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:13:18.995 slat (usec): min=2, max=19986, avg=109.79, stdev=753.51 00:13:18.995 clat (usec): min=7605, max=68032, avg=17174.09, stdev=7035.17 00:13:18.995 lat (usec): min=7614, max=68041, avg=17283.88, stdev=7077.30 00:13:18.995 clat percentiles (usec): 00:13:18.995 | 1.00th=[ 9241], 5.00th=[11469], 10.00th=[12387], 20.00th=[13173], 00:13:18.995 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13960], 
60.00th=[15533], 00:13:18.995 | 70.00th=[17695], 80.00th=[20055], 90.00th=[27657], 95.00th=[30802], 00:13:18.995 | 99.00th=[40633], 99.50th=[42730], 99.90th=[67634], 99.95th=[67634], 00:13:18.995 | 99.99th=[67634] 00:13:18.995 write: IOPS=3872, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1004msec); 0 zone resets 00:13:18.995 slat (usec): min=3, max=13454, avg=128.69, stdev=717.98 00:13:18.995 clat (usec): min=329, max=49516, avg=16904.78, stdev=8244.72 00:13:18.995 lat (usec): min=981, max=49532, avg=17033.47, stdev=8281.87 00:13:18.995 clat percentiles (usec): 00:13:18.995 | 1.00th=[ 2147], 5.00th=[ 8848], 10.00th=[11863], 20.00th=[12911], 00:13:18.995 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13829], 60.00th=[14222], 00:13:18.995 | 70.00th=[16712], 80.00th=[17433], 90.00th=[28967], 95.00th=[39060], 00:13:18.995 | 99.00th=[45876], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:13:18.995 | 99.99th=[49546] 00:13:18.995 bw ( KiB/s): min=13704, max=16384, per=20.28%, avg=15044.00, stdev=1895.05, samples=2 00:13:18.995 iops : min= 3426, max= 4096, avg=3761.00, stdev=473.76, samples=2 00:13:18.995 lat (usec) : 500=0.01%, 1000=0.04% 00:13:18.995 lat (msec) : 2=0.32%, 4=0.74%, 10=2.41%, 20=78.43%, 50=17.85% 00:13:18.995 lat (msec) : 100=0.20% 00:13:18.995 cpu : usr=2.19%, sys=4.99%, ctx=427, majf=0, minf=1 00:13:18.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:18.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:18.995 issued rwts: total=3584,3888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.995 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:18.995 00:13:18.996 Run status group 0 (all jobs): 00:13:18.996 READ: bw=67.5MiB/s (70.7MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=68.0MiB (71.3MB), run=1004-1008msec 00:13:18.996 WRITE: bw=72.4MiB/s (75.9MB/s), 15.1MiB/s-19.6MiB/s (15.9MB/s-20.6MB/s), io=73.0MiB (76.6MB), run=1004-1008msec 00:13:18.996 00:13:18.996 Disk stats (read/write): 00:13:18.996 nvme0n1: ios=3734/4096, merge=0/0, ticks=24396/27443, in_queue=51839, util=86.67% 00:13:18.996 nvme0n2: ios=4111/4215, merge=0/0, ticks=51174/51401, in_queue=102575, util=90.75% 00:13:18.996 nvme0n3: ios=3835/4096, merge=0/0, ticks=52237/51407, in_queue=103644, util=88.81% 00:13:18.996 nvme0n4: ios=3129/3278, merge=0/0, ticks=28943/27357, in_queue=56300, util=97.79% 00:13:18.996 10:30:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:18.996 [global] 00:13:18.996 thread=1 00:13:18.996 invalidate=1 00:13:18.996 rw=randwrite 00:13:18.996 time_based=1 00:13:18.996 runtime=1 00:13:18.996 ioengine=libaio 00:13:18.996 direct=1 00:13:18.996 bs=4096 00:13:18.996 iodepth=128 00:13:18.996 norandommap=0 00:13:18.996 numjobs=1 00:13:18.996 00:13:18.996 verify_dump=1 00:13:18.996 verify_backlog=512 00:13:18.996 verify_state_save=0 00:13:18.996 do_verify=1 00:13:18.996 verify=crc32c-intel 00:13:18.996 [job0] 00:13:18.996 filename=/dev/nvme0n1 00:13:18.996 [job1] 00:13:18.996 filename=/dev/nvme0n2 00:13:18.996 [job2] 00:13:18.996 filename=/dev/nvme0n3 00:13:18.996 [job3] 00:13:18.996 filename=/dev/nvme0n4 00:13:18.996 Could not set queue depth (nvme0n1) 00:13:18.996 Could not set queue depth (nvme0n2) 00:13:18.996 Could not set queue depth (nvme0n3) 00:13:18.996 Could not set queue depth (nvme0n4) 00:13:19.254 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:19.254 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:19.254 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:19.254 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:19.254 fio-3.35 00:13:19.254 Starting 4 threads 00:13:20.628 00:13:20.628 job0: (groupid=0, jobs=1): err= 0: pid=1186395: Mon Jul 15 10:30:08 2024 00:13:20.628 read: IOPS=3452, BW=13.5MiB/s (14.1MB/s)(14.1MiB/1047msec) 00:13:20.628 slat (usec): min=3, max=24030, avg=132.90, stdev=852.90 00:13:20.628 clat (usec): min=6497, max=54170, avg=17939.36, stdev=8651.71 00:13:20.628 lat (usec): min=6505, max=54178, avg=18072.26, stdev=8719.71 00:13:20.628 clat percentiles (usec): 00:13:20.628 | 1.00th=[ 7439], 5.00th=[10159], 10.00th=[10290], 20.00th=[11469], 00:13:20.628 | 30.00th=[12387], 40.00th=[13698], 50.00th=[15008], 60.00th=[16581], 00:13:20.628 | 70.00th=[18220], 80.00th=[23462], 90.00th=[32375], 95.00th=[38011], 00:13:20.628 | 99.00th=[43779], 99.50th=[48497], 99.90th=[54264], 99.95th=[54264], 00:13:20.628 | 99.99th=[54264] 00:13:20.628 write: IOPS=3912, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1047msec); 0 zone resets 00:13:20.628 slat (usec): min=4, max=14098, avg=118.80, stdev=848.36 00:13:20.628 clat (usec): min=5076, max=66954, avg=16546.92, stdev=8960.99 00:13:20.628 lat (usec): min=5088, max=66962, avg=16665.72, stdev=9004.52 00:13:20.628 clat percentiles (usec): 00:13:20.628 | 1.00th=[ 6849], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[11207], 00:13:20.628 | 30.00th=[11731], 40.00th=[12649], 50.00th=[15533], 60.00th=[16450], 00:13:20.628 | 70.00th=[18220], 80.00th=[19006], 90.00th=[20317], 95.00th=[30802], 00:13:20.628 | 99.00th=[63177], 99.50th=[65274], 99.90th=[66847], 99.95th=[66847], 00:13:20.628 | 99.99th=[66847] 00:13:20.628 bw ( KiB/s): min=15872, max=16128, per=24.01%, avg=16000.00, stdev=181.02, samples=2 00:13:20.628 iops : min= 3968, max= 4032, avg=4000.00, stdev=45.25, samples=2 00:13:20.628 lat (msec) : 10=5.87%, 20=76.33%, 50=16.46%, 100=1.34% 00:13:20.628 cpu : usr=3.63%, sys=6.69%, ctx=282, majf=0, minf=1 00:13:20.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:20.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:20.628 issued rwts: total=3615,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.628 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:20.628 job1: (groupid=0, jobs=1): err= 0: pid=1186416: Mon Jul 15 10:30:08 2024 00:13:20.628 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:13:20.628 slat (usec): min=2, max=19291, avg=81.63, stdev=696.59 00:13:20.628 clat (usec): min=3278, max=25788, avg=12195.00, stdev=3954.49 00:13:20.628 lat (usec): min=3285, max=35959, avg=12276.64, stdev=4006.88 00:13:20.628 clat percentiles (usec): 00:13:20.628 | 1.00th=[ 4752], 5.00th=[ 5932], 10.00th=[ 7832], 20.00th=[ 9896], 00:13:20.628 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11600], 60.00th=[11994], 00:13:20.628 | 70.00th=[12649], 80.00th=[15270], 90.00th=[17171], 95.00th=[19006], 00:13:20.628 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25297], 99.95th=[25560], 00:13:20.628 | 99.99th=[25822] 00:13:20.628 write: IOPS=5899, BW=23.0MiB/s 
(24.2MB/s)(23.3MiB/1010msec); 0 zone resets 00:13:20.628 slat (usec): min=3, max=11046, avg=67.36, stdev=601.28 00:13:20.628 clat (usec): min=324, max=88389, avg=10888.08, stdev=7386.69 00:13:20.628 lat (usec): min=342, max=88397, avg=10955.44, stdev=7418.63 00:13:20.628 clat percentiles (usec): 00:13:20.628 | 1.00th=[ 1037], 5.00th=[ 3425], 10.00th=[ 5014], 20.00th=[ 7373], 00:13:20.628 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[11207], 00:13:20.628 | 70.00th=[11994], 80.00th=[13042], 90.00th=[14222], 95.00th=[17957], 00:13:20.628 | 99.00th=[51643], 99.50th=[72877], 99.90th=[87557], 99.95th=[87557], 00:13:20.628 | 99.99th=[88605] 00:13:20.628 bw ( KiB/s): min=22152, max=24496, per=35.00%, avg=23324.00, stdev=1657.46, samples=2 00:13:20.628 iops : min= 5538, max= 6124, avg=5831.00, stdev=414.36, samples=2 00:13:20.628 lat (usec) : 500=0.02%, 750=0.21%, 1000=0.30% 00:13:20.628 lat (msec) : 2=0.49%, 4=3.35%, 10=28.16%, 20=63.91%, 50=3.01% 00:13:20.628 lat (msec) : 100=0.56% 00:13:20.628 cpu : usr=4.06%, sys=6.84%, ctx=355, majf=0, minf=1 00:13:20.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:20.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:20.628 issued rwts: total=5120,5958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.628 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:20.628 job2: (groupid=0, jobs=1): err= 0: pid=1186450: Mon Jul 15 10:30:08 2024 00:13:20.628 read: IOPS=2641, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1005msec) 00:13:20.628 slat (usec): min=2, max=29843, avg=192.67, stdev=1463.83 00:13:20.628 clat (usec): min=571, max=89400, avg=24447.92, stdev=16269.34 00:13:20.628 lat (usec): min=5296, max=90345, avg=24640.60, stdev=16410.22 00:13:20.628 clat percentiles (usec): 00:13:20.628 | 1.00th=[ 5538], 5.00th=[ 9765], 10.00th=[11338], 20.00th=[12256], 00:13:20.628 | 30.00th=[13042], 40.00th=[13960], 50.00th=[17433], 60.00th=[21365], 00:13:20.628 | 70.00th=[29754], 80.00th=[36439], 90.00th=[49546], 95.00th=[56886], 00:13:20.628 | 99.00th=[78119], 99.50th=[84411], 99.90th=[89654], 99.95th=[89654], 00:13:20.628 | 99.99th=[89654] 00:13:20.628 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:13:20.628 slat (usec): min=3, max=16177, avg=154.44, stdev=927.70 00:13:20.628 clat (usec): min=3206, max=75072, avg=20252.88, stdev=12683.63 00:13:20.628 lat (usec): min=3214, max=78285, avg=20407.32, stdev=12789.12 00:13:20.628 clat percentiles (usec): 00:13:20.628 | 1.00th=[ 3261], 5.00th=[ 9896], 10.00th=[11731], 20.00th=[12518], 00:13:20.628 | 30.00th=[13042], 40.00th=[13566], 50.00th=[14353], 60.00th=[17171], 00:13:20.628 | 70.00th=[20841], 80.00th=[27657], 90.00th=[38011], 95.00th=[49546], 00:13:20.628 | 99.00th=[67634], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:13:20.628 | 99.99th=[74974] 00:13:20.628 bw ( KiB/s): min=12024, max=12288, per=18.24%, avg=12156.00, stdev=186.68, samples=2 00:13:20.628 iops : min= 3006, max= 3072, avg=3039.00, stdev=46.67, samples=2 00:13:20.628 lat (usec) : 750=0.02% 00:13:20.628 lat (msec) : 4=0.56%, 10=5.08%, 20=56.71%, 50=30.92%, 100=6.71% 00:13:20.628 cpu : usr=1.99%, sys=3.39%, ctx=221, majf=0, minf=1 00:13:20.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:20.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:13:20.628 issued rwts: total=2655,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.628 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:20.628 job3: (groupid=0, jobs=1): err= 0: pid=1186463: Mon Jul 15 10:30:08 2024 00:13:20.628 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:13:20.628 slat (usec): min=2, max=32728, avg=126.25, stdev=1140.16 00:13:20.628 clat (usec): min=1724, max=85426, avg=16800.22, stdev=12831.00 00:13:20.628 lat (usec): min=1733, max=85485, avg=16926.47, stdev=12917.02 00:13:20.628 clat percentiles (usec): 00:13:20.628 | 1.00th=[ 3326], 5.00th=[ 7373], 10.00th=[ 8848], 20.00th=[10290], 00:13:20.628 | 30.00th=[11076], 40.00th=[11994], 50.00th=[12256], 60.00th=[12649], 00:13:20.628 | 70.00th=[13698], 80.00th=[18744], 90.00th=[37487], 95.00th=[49021], 00:13:20.628 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[71828], 00:13:20.628 | 99.99th=[85459] 00:13:20.628 write: IOPS=4299, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1004msec); 0 zone resets 00:13:20.628 slat (usec): min=3, max=10542, avg=93.09, stdev=598.34 00:13:20.628 clat (usec): min=403, max=89843, avg=13373.88, stdev=8150.28 00:13:20.628 lat (usec): min=1588, max=89851, avg=13466.97, stdev=8189.71 00:13:20.628 clat percentiles (usec): 00:13:20.628 | 1.00th=[ 3359], 5.00th=[ 5604], 10.00th=[ 7242], 20.00th=[ 9634], 00:13:20.628 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12256], 60.00th=[12649], 00:13:20.628 | 70.00th=[13042], 80.00th=[13829], 90.00th=[16319], 95.00th=[27657], 00:13:20.628 | 99.00th=[51119], 99.50th=[52167], 99.90th=[56361], 99.95th=[57410], 00:13:20.628 | 99.99th=[89654] 00:13:20.628 bw ( KiB/s): min=13944, max=19568, per=25.14%, avg=16756.00, stdev=3976.77, samples=2 00:13:20.628 iops : min= 3486, max= 4892, avg=4189.00, stdev=994.19, samples=2 00:13:20.628 lat (usec) : 500=0.01%, 750=0.01% 00:13:20.628 lat (msec) : 2=0.33%, 4=1.30%, 10=18.57%, 20=66.56%, 50=10.38% 00:13:20.628 lat (msec) : 100=2.84% 00:13:20.628 cpu : usr=2.59%, sys=5.98%, ctx=406, majf=0, minf=1 00:13:20.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:20.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:20.628 issued rwts: total=4096,4317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.628 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:20.628 00:13:20.628 Run status group 0 (all jobs): 00:13:20.628 READ: bw=57.8MiB/s (60.6MB/s), 10.3MiB/s-19.8MiB/s (10.8MB/s-20.8MB/s), io=60.5MiB (63.4MB), run=1004-1047msec 00:13:20.628 WRITE: bw=65.1MiB/s (68.2MB/s), 11.9MiB/s-23.0MiB/s (12.5MB/s-24.2MB/s), io=68.1MiB (71.4MB), run=1004-1047msec 00:13:20.628 00:13:20.628 Disk stats (read/write): 00:13:20.628 nvme0n1: ios=2925/3072, merge=0/0, ticks=27372/23560, in_queue=50932, util=86.17% 00:13:20.628 nvme0n2: ios=4272/5120, merge=0/0, ticks=45120/44589, in_queue=89709, util=97.46% 00:13:20.628 nvme0n3: ios=2091/2239, merge=0/0, ticks=20150/19071, in_queue=39221, util=96.23% 00:13:20.628 nvme0n4: ios=3690/4078, merge=0/0, ticks=35124/39149, in_queue=74273, util=95.78% 00:13:20.628 10:30:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:20.628 10:30:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1186867 00:13:20.628 10:30:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:20.628 
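Note: each fio run in this phase is parameterized entirely through the fio-wrapper flags seen in the traced command lines: -i sets the block size, -d the iodepth, -t the rw pattern, -r the runtime in seconds, and -v turns on the crc32c-intel verify options echoed in the earlier job files. The wrapper prints the generated job file ([global] plus one [jobN]/filename pair per namespace) before starting it. A hand-written equivalent of the 10-second read job launched here would look roughly like the sketch below; the job-file contents are copied from the [global]/[job] sections printed in this log, while the temporary file name is arbitrary and the wrapper's internals are not shown here, so treat this as an illustration rather than the wrapper itself.

# sketch: rebuild the read job that fio-wrapper generates for "-i 4096 -d 1 -t read -r 10"
cat > /tmp/nvmf-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1
[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf-read.fio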
10:30:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:20.628 [global] 00:13:20.628 thread=1 00:13:20.628 invalidate=1 00:13:20.628 rw=read 00:13:20.628 time_based=1 00:13:20.628 runtime=10 00:13:20.628 ioengine=libaio 00:13:20.628 direct=1 00:13:20.628 bs=4096 00:13:20.628 iodepth=1 00:13:20.628 norandommap=1 00:13:20.628 numjobs=1 00:13:20.628 00:13:20.628 [job0] 00:13:20.628 filename=/dev/nvme0n1 00:13:20.628 [job1] 00:13:20.628 filename=/dev/nvme0n2 00:13:20.628 [job2] 00:13:20.628 filename=/dev/nvme0n3 00:13:20.628 [job3] 00:13:20.628 filename=/dev/nvme0n4 00:13:20.628 Could not set queue depth (nvme0n1) 00:13:20.629 Could not set queue depth (nvme0n2) 00:13:20.629 Could not set queue depth (nvme0n3) 00:13:20.629 Could not set queue depth (nvme0n4) 00:13:20.629 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.629 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.629 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.629 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.629 fio-3.35 00:13:20.629 Starting 4 threads 00:13:23.899 10:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:23.899 10:30:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:23.899 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=495616, buflen=4096 00:13:23.899 fio: pid=1187040, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:23.899 10:30:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:23.899 10:30:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:23.899 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=5722112, buflen=4096 00:13:23.899 fio: pid=1187032, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:24.156 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=66031616, buflen=4096 00:13:24.156 fio: pid=1187020, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:24.156 10:30:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:24.156 10:30:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:24.413 10:30:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:24.413 10:30:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:24.413 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=364544, buflen=4096 00:13:24.413 fio: pid=1187024, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:24.413 00:13:24.413 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1187020: Mon Jul 15 10:30:12 2024 00:13:24.413 read: IOPS=4711, BW=18.4MiB/s 
(19.3MB/s)(63.0MiB/3422msec) 00:13:24.413 slat (usec): min=4, max=12766, avg=10.55, stdev=143.62 00:13:24.413 clat (usec): min=160, max=11719, avg=198.28, stdev=118.56 00:13:24.413 lat (usec): min=164, max=13061, avg=208.83, stdev=187.89 00:13:24.413 clat percentiles (usec): 00:13:24.413 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:13:24.413 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:13:24.413 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 229], 95.00th=[ 243], 00:13:24.413 | 99.00th=[ 281], 99.50th=[ 318], 99.90th=[ 506], 99.95th=[ 594], 00:13:24.413 | 99.99th=[ 7373] 00:13:24.413 bw ( KiB/s): min=16640, max=21016, per=100.00%, avg=19302.67, stdev=1447.41, samples=6 00:13:24.413 iops : min= 4160, max= 5254, avg=4825.67, stdev=361.85, samples=6 00:13:24.413 lat (usec) : 250=96.27%, 500=3.61%, 750=0.08%, 1000=0.01% 00:13:24.413 lat (msec) : 4=0.01%, 10=0.01%, 20=0.01% 00:13:24.413 cpu : usr=1.78%, sys=5.09%, ctx=16128, majf=0, minf=1 00:13:24.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.413 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.413 issued rwts: total=16122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.413 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1187024: Mon Jul 15 10:30:12 2024 00:13:24.413 read: IOPS=24, BW=96.5KiB/s (98.8kB/s)(356KiB/3689msec) 00:13:24.413 slat (nsec): min=12278, max=80993, avg=19210.73, stdev=10204.50 00:13:24.413 clat (usec): min=387, max=42086, avg=41169.35, stdev=4397.58 00:13:24.413 lat (usec): min=411, max=42099, avg=41187.87, stdev=4397.32 00:13:24.413 clat percentiles (usec): 00:13:24.413 | 1.00th=[ 388], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:24.413 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:13:24.413 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:24.413 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:24.413 | 99.99th=[42206] 00:13:24.413 bw ( KiB/s): min= 88, max= 104, per=0.50%, avg=96.57, stdev= 6.70, samples=7 00:13:24.413 iops : min= 22, max= 26, avg=24.14, stdev= 1.68, samples=7 00:13:24.413 lat (usec) : 500=1.11% 00:13:24.413 lat (msec) : 50=97.78% 00:13:24.414 cpu : usr=0.05%, sys=0.00%, ctx=93, majf=0, minf=1 00:13:24.414 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.414 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.414 issued rwts: total=90,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.414 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.414 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1187032: Mon Jul 15 10:30:12 2024 00:13:24.414 read: IOPS=438, BW=1752KiB/s (1794kB/s)(5588KiB/3190msec) 00:13:24.414 slat (nsec): min=4527, max=46102, avg=12624.60, stdev=6306.36 00:13:24.414 clat (usec): min=199, max=42348, avg=2251.52, stdev=8838.14 00:13:24.414 lat (usec): min=219, max=42367, avg=2264.14, stdev=8839.43 00:13:24.414 clat percentiles (usec): 00:13:24.414 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 245], 00:13:24.414 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 277], 
60.00th=[ 293], 00:13:24.414 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 379], 95.00th=[ 502], 00:13:24.414 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:24.414 | 99.99th=[42206] 00:13:24.414 bw ( KiB/s): min= 96, max= 7072, per=9.66%, avg=1857.33, stdev=2695.07, samples=6 00:13:24.414 iops : min= 24, max= 1768, avg=464.33, stdev=673.77, samples=6 00:13:24.414 lat (usec) : 250=32.33%, 500=62.59%, 750=0.21%, 1000=0.07% 00:13:24.414 lat (msec) : 50=4.72% 00:13:24.414 cpu : usr=0.22%, sys=0.66%, ctx=1399, majf=0, minf=1 00:13:24.414 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.414 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.414 issued rwts: total=1398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.414 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.414 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1187040: Mon Jul 15 10:30:12 2024 00:13:24.414 read: IOPS=42, BW=167KiB/s (171kB/s)(484KiB/2902msec) 00:13:24.414 slat (nsec): min=5544, max=34167, avg=15059.55, stdev=9036.80 00:13:24.414 clat (usec): min=225, max=42013, avg=23881.80, stdev=20579.80 00:13:24.414 lat (usec): min=239, max=42032, avg=23896.85, stdev=20585.47 00:13:24.414 clat percentiles (usec): 00:13:24.414 | 1.00th=[ 243], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 269], 00:13:24.414 | 30.00th=[ 277], 40.00th=[ 318], 50.00th=[41157], 60.00th=[41681], 00:13:24.414 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:24.414 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:24.414 | 99.99th=[42206] 00:13:24.414 bw ( KiB/s): min= 96, max= 496, per=0.92%, avg=177.60, stdev=178.02, samples=5 00:13:24.414 iops : min= 24, max= 124, avg=44.40, stdev=44.51, samples=5 00:13:24.414 lat (usec) : 250=4.92%, 500=36.89%, 750=0.82% 00:13:24.414 lat (msec) : 50=56.56% 00:13:24.414 cpu : usr=0.00%, sys=0.10%, ctx=123, majf=0, minf=1 00:13:24.414 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.414 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.414 issued rwts: total=122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.414 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.414 00:13:24.414 Run status group 0 (all jobs): 00:13:24.414 READ: bw=18.8MiB/s (19.7MB/s), 96.5KiB/s-18.4MiB/s (98.8kB/s-19.3MB/s), io=69.2MiB (72.6MB), run=2902-3689msec 00:13:24.414 00:13:24.414 Disk stats (read/write): 00:13:24.414 nvme0n1: ios=15924/0, merge=0/0, ticks=4028/0, in_queue=4028, util=99.14% 00:13:24.414 nvme0n2: ios=87/0, merge=0/0, ticks=3583/0, in_queue=3583, util=96.46% 00:13:24.414 nvme0n3: ios=1395/0, merge=0/0, ticks=3057/0, in_queue=3057, util=96.75% 00:13:24.414 nvme0n4: ios=119/0, merge=0/0, ticks=2807/0, in_queue=2807, util=96.71% 00:13:24.671 10:30:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:24.671 10:30:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:24.928 10:30:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:24.928 
10:30:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:25.185 10:30:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:25.185 10:30:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:25.442 10:30:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:25.442 10:30:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:25.700 10:30:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:25.700 10:30:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1186867 00:13:25.700 10:30:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:25.700 10:30:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.957 10:30:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.957 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:25.957 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:25.957 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.957 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:25.957 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.957 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:25.957 10:30:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:25.957 10:30:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:25.957 nvmf hotplug test: fio failed as expected 00:13:25.957 10:30:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:26.216 rmmod nvme_tcp 00:13:26.216 rmmod nvme_fabrics 00:13:26.216 rmmod nvme_keyring 
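Note: the hotplug case above passes by failing. While the 10-second read job is still running, the target-side bdevs are deleted one by one over RPC (bdev_raid_delete concat0/raid0, then bdev_malloc_delete Malloc0 through Malloc6), each fio job exits with err=121 (Remote I/O error), and the script records fio_status=4 and prints 'nvmf hotplug test: fio failed as expected'. The host-side disconnect and cleanup that follows reduces to roughly the sequence below; the NQN and serial are the values used throughout this run, paths are shortened, and the poll loop is a simplified stand-in for the waitforserial_disconnect helper traced above.

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# simplified waitforserial_disconnect: wait until no block device still
# reports the SPDKISFASTANDAWESOME serial
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1
done
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
modprobe -v -r nvme-tcp       # unloads nvme_tcp, nvme_fabrics, nvme_keyring as logged above
modprobe -v -r nvme-fabrics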
00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1184310 ']' 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1184310 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1184310 ']' 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1184310 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1184310 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1184310' 00:13:26.216 killing process with pid 1184310 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1184310 00:13:26.216 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1184310 00:13:26.474 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:26.475 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:26.475 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:26.475 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.475 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:26.475 10:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.475 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.475 10:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.380 10:30:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:28.380 00:13:28.380 real 0m23.704s 00:13:28.380 user 1m23.081s 00:13:28.380 sys 0m6.505s 00:13:28.380 10:30:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.380 10:30:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.380 ************************************ 00:13:28.380 END TEST nvmf_fio_target 00:13:28.380 ************************************ 00:13:28.639 10:30:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:28.639 10:30:16 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:28.639 10:30:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:28.639 10:30:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.639 10:30:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:28.639 ************************************ 00:13:28.639 START TEST nvmf_bdevio 00:13:28.639 ************************************ 00:13:28.639 10:30:16 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:28.639 * Looking for test storage... 00:13:28.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:28.639 10:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:30.622 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:30.623 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:30.623 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:30.623 Found net devices under 0000:09:00.0: cvl_0_0 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:30.623 
Found net devices under 0000:09:00.1: cvl_0_1 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:30.623 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:30.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:13:30.624 00:13:30.624 --- 10.0.0.2 ping statistics --- 00:13:30.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.624 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:30.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:13:30.624 00:13:30.624 --- 10.0.0.1 ping statistics --- 00:13:30.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.624 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:30.624 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:30.882 10:30:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:30.882 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:30.882 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:30.882 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:30.882 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1189673 00:13:30.882 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:30.882 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1189673 00:13:30.882 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1189673 ']' 00:13:30.882 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.882 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.882 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.882 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.882 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:30.882 [2024-07-15 10:30:19.235176] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:30.882 [2024-07-15 10:30:19.235269] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.882 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.882 [2024-07-15 10:30:19.297971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.882 [2024-07-15 10:30:19.405735] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.882 [2024-07-15 10:30:19.405799] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:30.882 [2024-07-15 10:30:19.405832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.882 [2024-07-15 10:30:19.405844] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.882 [2024-07-15 10:30:19.405870] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.882 [2024-07-15 10:30:19.405961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:30.882 [2024-07-15 10:30:19.406025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:30.882 [2024-07-15 10:30:19.406072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:30.882 [2024-07-15 10:30:19.406075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:31.139 [2024-07-15 10:30:19.576682] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:31.139 Malloc0 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
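The target configuration traced above boils down to five RPCs. A minimal sketch of the same sequence issued directly through scripts/rpc.py (the test wraps these calls in its rpc_cmd helper; the socket path and working directory are assumed to be the nvmf_tgt defaults):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                                     # transport options exactly as traced above
$rpc bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420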
00:13:31.139 [2024-07-15 10:30:19.629210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:31.139 { 00:13:31.139 "params": { 00:13:31.139 "name": "Nvme$subsystem", 00:13:31.139 "trtype": "$TEST_TRANSPORT", 00:13:31.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:31.139 "adrfam": "ipv4", 00:13:31.139 "trsvcid": "$NVMF_PORT", 00:13:31.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:31.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:31.139 "hdgst": ${hdgst:-false}, 00:13:31.139 "ddgst": ${ddgst:-false} 00:13:31.139 }, 00:13:31.139 "method": "bdev_nvme_attach_controller" 00:13:31.139 } 00:13:31.139 EOF 00:13:31.139 )") 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:31.139 10:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:31.139 "params": { 00:13:31.139 "name": "Nvme1", 00:13:31.139 "trtype": "tcp", 00:13:31.139 "traddr": "10.0.0.2", 00:13:31.139 "adrfam": "ipv4", 00:13:31.139 "trsvcid": "4420", 00:13:31.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:31.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:31.139 "hdgst": false, 00:13:31.139 "ddgst": false 00:13:31.139 }, 00:13:31.139 "method": "bdev_nvme_attach_controller" 00:13:31.139 }' 00:13:31.139 [2024-07-15 10:30:19.676318] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:13:31.139 [2024-07-15 10:30:19.676395] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189697 ] 00:13:31.396 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.396 [2024-07-15 10:30:19.737552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:31.396 [2024-07-15 10:30:19.851525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.396 [2024-07-15 10:30:19.851593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.396 [2024-07-15 10:30:19.851597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.653 I/O targets: 00:13:31.653 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:31.653 00:13:31.653 00:13:31.653 CUnit - A unit testing framework for C - Version 2.1-3 00:13:31.653 http://cunit.sourceforge.net/ 00:13:31.653 00:13:31.653 00:13:31.653 Suite: bdevio tests on: Nvme1n1 00:13:31.910 Test: blockdev write read block ...passed 00:13:31.910 Test: blockdev write zeroes read block ...passed 00:13:31.910 Test: blockdev write zeroes read no split ...passed 00:13:31.910 Test: blockdev write zeroes read split ...passed 00:13:31.910 Test: blockdev write zeroes read split partial ...passed 00:13:31.910 Test: blockdev reset ...[2024-07-15 10:30:20.347358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:31.910 [2024-07-15 10:30:20.347463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183f580 (9): Bad file descriptor 00:13:31.910 [2024-07-15 10:30:20.441712] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:31.910 passed 00:13:32.168 Test: blockdev write read 8 blocks ...passed 00:13:32.168 Test: blockdev write read size > 128k ...passed 00:13:32.168 Test: blockdev write read invalid size ...passed 00:13:32.168 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:32.168 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:32.168 Test: blockdev write read max offset ...passed 00:13:32.168 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:32.168 Test: blockdev writev readv 8 blocks ...passed 00:13:32.168 Test: blockdev writev readv 30 x 1block ...passed 00:13:32.168 Test: blockdev writev readv block ...passed 00:13:32.168 Test: blockdev writev readv size > 128k ...passed 00:13:32.168 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:32.168 Test: blockdev comparev and writev ...[2024-07-15 10:30:20.693816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.168 [2024-07-15 10:30:20.693853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:32.168 [2024-07-15 10:30:20.693878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.168 [2024-07-15 10:30:20.693896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:32.168 [2024-07-15 10:30:20.694232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.168 [2024-07-15 10:30:20.694263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:32.168 [2024-07-15 10:30:20.694286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.168 [2024-07-15 10:30:20.694302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:32.168 [2024-07-15 10:30:20.694660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.168 [2024-07-15 10:30:20.694685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:32.168 [2024-07-15 10:30:20.694707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.168 [2024-07-15 10:30:20.694724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:32.168 [2024-07-15 10:30:20.695090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.168 [2024-07-15 10:30:20.695114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:32.168 [2024-07-15 10:30:20.695136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.168 [2024-07-15 10:30:20.695152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:32.426 passed 00:13:32.426 Test: blockdev nvme passthru rw ...passed 00:13:32.426 Test: blockdev nvme passthru vendor specific ...[2024-07-15 10:30:20.777071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:32.426 [2024-07-15 10:30:20.777099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:32.426 [2024-07-15 10:30:20.777240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:32.426 [2024-07-15 10:30:20.777264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:32.426 [2024-07-15 10:30:20.777397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:32.426 [2024-07-15 10:30:20.777420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:32.426 [2024-07-15 10:30:20.777556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:32.426 [2024-07-15 10:30:20.777580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:32.426 passed 00:13:32.426 Test: blockdev nvme admin passthru ...passed 00:13:32.426 Test: blockdev copy ...passed 00:13:32.426 00:13:32.426 Run Summary: Type Total Ran Passed Failed Inactive 00:13:32.426 suites 1 1 n/a 0 0 00:13:32.426 tests 23 23 23 0 0 00:13:32.426 asserts 152 152 152 0 n/a 00:13:32.426 00:13:32.426 Elapsed time = 1.271 seconds 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:32.684 rmmod nvme_tcp 00:13:32.684 rmmod nvme_fabrics 00:13:32.684 rmmod nvme_keyring 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1189673 ']' 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1189673 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1189673 ']' 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1189673 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1189673 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1189673' 00:13:32.684 killing process with pid 1189673 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1189673 00:13:32.684 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1189673 00:13:32.944 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:32.944 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:32.944 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:32.944 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:32.944 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:32.944 10:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.944 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.944 10:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.484 10:30:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:35.484 00:13:35.484 real 0m6.489s 00:13:35.484 user 0m11.295s 00:13:35.484 sys 0m2.056s 00:13:35.484 10:30:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:35.484 10:30:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:35.484 ************************************ 00:13:35.484 END TEST nvmf_bdevio 00:13:35.484 ************************************ 00:13:35.484 10:30:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:35.484 10:30:23 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:35.484 10:30:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:35.485 10:30:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.485 10:30:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:35.485 ************************************ 00:13:35.485 START TEST nvmf_auth_target 00:13:35.485 ************************************ 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:35.485 * Looking for test storage... 
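Between the two tests the harness tears everything back down; condensing the nvmftestfini trace above into plain commands gives roughly the following (a hedged sketch — the real helpers live in test/nvmf/common.sh, and _remove_spdk_ns is assumed here to amount to deleting the namespace):

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # issued via rpc_cmd before the exit trap fires
modprobe -v -r nvme-tcp                                  # pulls out nvme_tcp, nvme_fabrics, nvme_keyring as traced
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                       # roughly what killprocess 1189673 does
ip netns delete cvl_0_0_ns_spdk                          # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1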
00:13:35.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:35.485 10:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.392 10:30:25 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:37.392 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:37.393 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:37.393 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:13:37.393 Found net devices under 0000:09:00.0: cvl_0_0 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:37.393 Found net devices under 0000:09:00.1: cvl_0_1 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:37.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:13:37.393 00:13:37.393 --- 10.0.0.2 ping statistics --- 00:13:37.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.393 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:13:37.393 00:13:37.393 --- 10.0.0.1 ping statistics --- 00:13:37.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.393 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1191787 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1191787 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1191787 ']' 00:13:37.393 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.394 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.394 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
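The nvmf_tcp_init plumbing repeated just above, plus the target launch, condenses to the commands below (a sketch of what the traced helpers do; cvl_0_0 and cvl_0_1 are the ice ports discovered earlier, and the nvmf_tgt flags are the ones shown in the trace):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &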
00:13:37.394 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.394 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1191919 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=178551dfe15c36ff373e398cbf17ed6c70d56da55b149630 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.CgU 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 178551dfe15c36ff373e398cbf17ed6c70d56da55b149630 0 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 178551dfe15c36ff373e398cbf17ed6c70d56da55b149630 0 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=178551dfe15c36ff373e398cbf17ed6c70d56da55b149630 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.CgU 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.CgU 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.CgU 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=efd54594e0e0614470356b62fedea2c511b9b843e00d2415749cd2e755e1ee76 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.V03 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key efd54594e0e0614470356b62fedea2c511b9b843e00d2415749cd2e755e1ee76 3 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 efd54594e0e0614470356b62fedea2c511b9b843e00d2415749cd2e755e1ee76 3 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=efd54594e0e0614470356b62fedea2c511b9b843e00d2415749cd2e755e1ee76 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.V03 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.V03 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.V03 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=55e10ad7c16744594c80d819cb1948d5 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6xW 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 55e10ad7c16744594c80d819cb1948d5 1 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 55e10ad7c16744594c80d819cb1948d5 1 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=55e10ad7c16744594c80d819cb1948d5 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:37.652 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6xW 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6xW 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.6xW 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cf91589da283841abd74f3941d131f3b0539675d501dcbca 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.NdH 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cf91589da283841abd74f3941d131f3b0539675d501dcbca 2 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cf91589da283841abd74f3941d131f3b0539675d501dcbca 2 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cf91589da283841abd74f3941d131f3b0539675d501dcbca 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.NdH 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.NdH 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.NdH 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=095f5c3176d7959f491fcf32d381f6fa4af6fc0ec440169a 00:13:37.911 
10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4bd 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 095f5c3176d7959f491fcf32d381f6fa4af6fc0ec440169a 2 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 095f5c3176d7959f491fcf32d381f6fa4af6fc0ec440169a 2 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=095f5c3176d7959f491fcf32d381f6fa4af6fc0ec440169a 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4bd 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4bd 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.4bd 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8d42152a13d95747d15e13671cb98480 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7yd 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8d42152a13d95747d15e13671cb98480 1 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8d42152a13d95747d15e13671cb98480 1 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8d42152a13d95747d15e13671cb98480 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7yd 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7yd 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.7yd 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ba0dd980e6cb7afc8f55b01ef8875a6fa685eb61616e398b0af5d2edb236bb57 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Fgq 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ba0dd980e6cb7afc8f55b01ef8875a6fa685eb61616e398b0af5d2edb236bb57 3 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ba0dd980e6cb7afc8f55b01ef8875a6fa685eb61616e398b0af5d2edb236bb57 3 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ba0dd980e6cb7afc8f55b01ef8875a6fa685eb61616e398b0af5d2edb236bb57 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Fgq 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Fgq 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Fgq 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1191787 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1191787 ']' 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
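[annotation] The gen_dhchap_key calls traced above all follow the same recipe: read the requested number of random bytes with xxd from /dev/urandom, wrap them into a DHHC-1:<digest-id>: secret with the inline "python -" helper, write the result to a mktemp'd /tmp/spdk.key-* file and chmod it 0600. The following is a minimal standalone sketch of that recipe, not SPDK's helper itself; it assumes python3 is available and that the secret payload is base64 of the key bytes followed by their CRC-32 in little-endian order, which matches the DHHC-1 secrets visible later in this log.

# sketch only: illustrates the key-generation steps traced above
digest_id=2                                # 0=null, 1=sha256, 2=sha384, 3=sha512 (same mapping as common.sh)
key_hex=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 raw bytes -> 48 hex chars, the "len=48" case in the trace
key_file=$(mktemp -t spdk.key-example.XXX) # hypothetical file name pattern
python3 - "$digest_id" "$key_hex" > "$key_file" <<'PY'
import base64, binascii, struct, sys
digest, key = int(sys.argv[1]), bytes.fromhex(sys.argv[2])
# assumed layout: base64(key bytes || CRC-32 of key, little-endian), then a trailing colon
crc = struct.pack("<I", binascii.crc32(key) & 0xffffffff)
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$key_file"
echo "$key_file"

Running the sketch prints a key file path of the same shape as the /tmp/spdk.key-sha384.* paths above, with a DHHC-1:02:...: secret inside.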
00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.911 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.170 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.170 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:38.170 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1191919 /var/tmp/host.sock 00:13:38.170 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1191919 ']' 00:13:38.170 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:38.170 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:38.170 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:38.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:38.170 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:38.170 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.428 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.428 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:38.428 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:38.428 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.428 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.428 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.428 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:38.428 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CgU 00:13:38.428 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.428 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.428 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.428 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.CgU 00:13:38.428 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.CgU 00:13:38.686 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.V03 ]] 00:13:38.686 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.V03 00:13:38.686 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.686 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.686 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.686 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.V03 00:13:38.686 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.V03 00:13:38.943 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:38.943 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.6xW 00:13:38.943 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.943 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.943 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.943 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.6xW 00:13:38.943 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.6xW 00:13:39.201 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.NdH ]] 00:13:39.201 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NdH 00:13:39.201 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.201 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.201 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.201 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NdH 00:13:39.201 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NdH 00:13:39.459 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:39.459 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4bd 00:13:39.459 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.459 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.459 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.459 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.4bd 00:13:39.459 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.4bd 00:13:39.716 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.7yd ]] 00:13:39.716 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7yd 00:13:39.716 10:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.716 10:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.716 10:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.716 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7yd 00:13:39.716 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.7yd 00:13:39.974 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:39.974 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Fgq 00:13:39.974 10:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.974 10:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.974 10:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.974 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Fgq 00:13:39.974 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Fgq 00:13:40.232 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:40.232 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:40.232 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:40.232 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.232 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:40.232 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:40.489 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:40.489 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:40.489 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:40.489 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:40.489 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:40.489 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.489 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.489 10:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.489 10:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.489 10:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.489 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.489 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.747 00:13:40.747 10:30:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:40.747 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:40.747 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.005 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.005 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.005 10:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.005 10:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.005 10:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.005 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.005 { 00:13:41.005 "cntlid": 1, 00:13:41.005 "qid": 0, 00:13:41.005 "state": "enabled", 00:13:41.005 "thread": "nvmf_tgt_poll_group_000", 00:13:41.005 "listen_address": { 00:13:41.005 "trtype": "TCP", 00:13:41.005 "adrfam": "IPv4", 00:13:41.005 "traddr": "10.0.0.2", 00:13:41.005 "trsvcid": "4420" 00:13:41.005 }, 00:13:41.005 "peer_address": { 00:13:41.005 "trtype": "TCP", 00:13:41.005 "adrfam": "IPv4", 00:13:41.005 "traddr": "10.0.0.1", 00:13:41.005 "trsvcid": "46966" 00:13:41.005 }, 00:13:41.005 "auth": { 00:13:41.005 "state": "completed", 00:13:41.005 "digest": "sha256", 00:13:41.005 "dhgroup": "null" 00:13:41.005 } 00:13:41.005 } 00:13:41.005 ]' 00:13:41.005 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.263 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.263 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.263 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:41.263 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.263 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.263 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.263 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.520 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:13:42.452 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.452 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:42.452 10:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.452 10:30:30 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.452 10:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.452 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.452 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:42.452 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:42.709 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:42.709 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:42.709 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:42.709 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:42.709 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:42.709 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.709 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.709 10:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.709 10:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.709 10:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.709 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.709 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.966 00:13:42.966 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:42.966 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:42.966 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.222 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.222 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.222 10:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.222 10:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.222 10:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.222 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.222 { 00:13:43.222 "cntlid": 3, 00:13:43.222 "qid": 0, 00:13:43.223 
"state": "enabled", 00:13:43.223 "thread": "nvmf_tgt_poll_group_000", 00:13:43.223 "listen_address": { 00:13:43.223 "trtype": "TCP", 00:13:43.223 "adrfam": "IPv4", 00:13:43.223 "traddr": "10.0.0.2", 00:13:43.223 "trsvcid": "4420" 00:13:43.223 }, 00:13:43.223 "peer_address": { 00:13:43.223 "trtype": "TCP", 00:13:43.223 "adrfam": "IPv4", 00:13:43.223 "traddr": "10.0.0.1", 00:13:43.223 "trsvcid": "46988" 00:13:43.223 }, 00:13:43.223 "auth": { 00:13:43.223 "state": "completed", 00:13:43.223 "digest": "sha256", 00:13:43.223 "dhgroup": "null" 00:13:43.223 } 00:13:43.223 } 00:13:43.223 ]' 00:13:43.223 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.223 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:43.223 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.223 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:43.223 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.223 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.223 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.223 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.480 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:13:44.413 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.413 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:44.413 10:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.413 10:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.413 10:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.413 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.413 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:44.413 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:44.670 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:44.670 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.670 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:44.670 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:44.670 10:30:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:44.670 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.670 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.670 10:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.670 10:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.670 10:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.670 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.670 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.236 00:13:45.236 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:45.236 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.236 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.236 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.236 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.236 10:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.236 10:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.493 10:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.493 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:45.493 { 00:13:45.493 "cntlid": 5, 00:13:45.493 "qid": 0, 00:13:45.493 "state": "enabled", 00:13:45.493 "thread": "nvmf_tgt_poll_group_000", 00:13:45.493 "listen_address": { 00:13:45.493 "trtype": "TCP", 00:13:45.493 "adrfam": "IPv4", 00:13:45.493 "traddr": "10.0.0.2", 00:13:45.493 "trsvcid": "4420" 00:13:45.493 }, 00:13:45.493 "peer_address": { 00:13:45.493 "trtype": "TCP", 00:13:45.493 "adrfam": "IPv4", 00:13:45.493 "traddr": "10.0.0.1", 00:13:45.493 "trsvcid": "47014" 00:13:45.493 }, 00:13:45.493 "auth": { 00:13:45.493 "state": "completed", 00:13:45.493 "digest": "sha256", 00:13:45.493 "dhgroup": "null" 00:13:45.493 } 00:13:45.493 } 00:13:45.493 ]' 00:13:45.493 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:45.493 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:45.493 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:45.493 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:45.493 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:13:45.493 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.493 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.493 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.749 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:13:46.681 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.681 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:46.681 10:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.681 10:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.681 10:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.681 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:46.681 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:46.681 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:46.939 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:13:46.939 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:46.939 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:46.939 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:46.939 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:46.939 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.939 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:13:46.939 10:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.939 10:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.939 10:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.939 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:46.939 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:47.197 00:13:47.197 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:47.197 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.197 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:47.455 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.455 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.455 10:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.455 10:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.455 10:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.455 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:47.455 { 00:13:47.455 "cntlid": 7, 00:13:47.455 "qid": 0, 00:13:47.455 "state": "enabled", 00:13:47.455 "thread": "nvmf_tgt_poll_group_000", 00:13:47.455 "listen_address": { 00:13:47.455 "trtype": "TCP", 00:13:47.455 "adrfam": "IPv4", 00:13:47.455 "traddr": "10.0.0.2", 00:13:47.455 "trsvcid": "4420" 00:13:47.455 }, 00:13:47.455 "peer_address": { 00:13:47.455 "trtype": "TCP", 00:13:47.455 "adrfam": "IPv4", 00:13:47.455 "traddr": "10.0.0.1", 00:13:47.455 "trsvcid": "47038" 00:13:47.455 }, 00:13:47.455 "auth": { 00:13:47.455 "state": "completed", 00:13:47.455 "digest": "sha256", 00:13:47.455 "dhgroup": "null" 00:13:47.455 } 00:13:47.455 } 00:13:47.455 ]' 00:13:47.455 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:47.455 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:47.455 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:47.455 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:47.455 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:47.712 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.712 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.712 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.969 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.900 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.464 00:13:49.464 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:49.464 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:49.464 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.464 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.464 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.464 10:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:13:49.464 10:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.464 10:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.464 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.464 { 00:13:49.464 "cntlid": 9, 00:13:49.464 "qid": 0, 00:13:49.464 "state": "enabled", 00:13:49.464 "thread": "nvmf_tgt_poll_group_000", 00:13:49.464 "listen_address": { 00:13:49.464 "trtype": "TCP", 00:13:49.464 "adrfam": "IPv4", 00:13:49.465 "traddr": "10.0.0.2", 00:13:49.465 "trsvcid": "4420" 00:13:49.465 }, 00:13:49.465 "peer_address": { 00:13:49.465 "trtype": "TCP", 00:13:49.465 "adrfam": "IPv4", 00:13:49.465 "traddr": "10.0.0.1", 00:13:49.465 "trsvcid": "57876" 00:13:49.465 }, 00:13:49.465 "auth": { 00:13:49.465 "state": "completed", 00:13:49.465 "digest": "sha256", 00:13:49.465 "dhgroup": "ffdhe2048" 00:13:49.465 } 00:13:49.465 } 00:13:49.465 ]' 00:13:49.465 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:49.722 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.722 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:49.722 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:49.722 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:49.722 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.722 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.722 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.981 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.912 10:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.169 10:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.169 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.170 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.426 00:13:51.426 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:51.426 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:51.426 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.683 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.683 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.683 10:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.683 10:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.683 10:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.683 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.683 { 00:13:51.683 "cntlid": 11, 00:13:51.683 "qid": 0, 00:13:51.683 "state": "enabled", 00:13:51.683 "thread": "nvmf_tgt_poll_group_000", 00:13:51.683 "listen_address": { 00:13:51.683 "trtype": "TCP", 00:13:51.683 "adrfam": "IPv4", 00:13:51.683 "traddr": "10.0.0.2", 00:13:51.683 "trsvcid": "4420" 00:13:51.683 }, 00:13:51.683 "peer_address": { 00:13:51.683 "trtype": "TCP", 00:13:51.683 "adrfam": "IPv4", 00:13:51.683 "traddr": "10.0.0.1", 00:13:51.683 "trsvcid": "57906" 00:13:51.683 }, 00:13:51.684 "auth": { 00:13:51.684 "state": "completed", 00:13:51.684 "digest": "sha256", 00:13:51.684 "dhgroup": "ffdhe2048" 00:13:51.684 } 00:13:51.684 } 00:13:51.684 ]' 00:13:51.684 
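[annotation] For each digest/dhgroup/key combination, the trace above repeats one cycle: reconfigure the host with bdev_nvme_set_options, register the key pair on the subsystem with nvmf_subsystem_add_host, attach a controller, then confirm on the target side that the qpair really negotiated the expected parameters before detaching and re-running the handshake with nvme connect using the DHHC-1 secrets. A standalone version of that qpair assertion is sketched below; the expected digest and dhgroup are per-iteration placeholders, and the rpc.py path is the one used throughout this log.

# sketch only: the qpair check performed after each attach in auth.sh
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
qpairs=$($rpc_py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]    # expected digest for this iteration
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe2048" ]] # expected DH group for this iteration
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]] # authentication must have completed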
10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.684 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.684 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:51.684 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:51.684 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:51.684 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.684 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.684 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.940 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:13:52.899 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.899 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:52.899 10:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.899 10:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.899 10:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.899 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.899 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:52.899 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:53.163 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:13:53.163 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:53.163 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:53.163 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:53.163 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:53.163 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.163 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.163 10:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.163 10:30:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.164 10:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.164 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.164 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.420 00:13:53.420 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:53.420 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:53.420 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.676 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.676 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.676 10:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.676 10:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.676 10:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.676 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:53.676 { 00:13:53.676 "cntlid": 13, 00:13:53.676 "qid": 0, 00:13:53.676 "state": "enabled", 00:13:53.676 "thread": "nvmf_tgt_poll_group_000", 00:13:53.676 "listen_address": { 00:13:53.676 "trtype": "TCP", 00:13:53.676 "adrfam": "IPv4", 00:13:53.676 "traddr": "10.0.0.2", 00:13:53.676 "trsvcid": "4420" 00:13:53.676 }, 00:13:53.676 "peer_address": { 00:13:53.676 "trtype": "TCP", 00:13:53.676 "adrfam": "IPv4", 00:13:53.676 "traddr": "10.0.0.1", 00:13:53.676 "trsvcid": "57942" 00:13:53.676 }, 00:13:53.676 "auth": { 00:13:53.676 "state": "completed", 00:13:53.676 "digest": "sha256", 00:13:53.676 "dhgroup": "ffdhe2048" 00:13:53.676 } 00:13:53.676 } 00:13:53.676 ]' 00:13:53.676 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:53.676 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:53.676 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:53.932 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:53.932 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:53.932 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.932 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.932 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.189 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:13:55.122 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.122 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:55.122 10:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.122 10:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.122 10:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.122 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:55.122 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:55.122 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:55.379 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:13:55.379 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:55.379 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:55.379 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:55.379 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:55.379 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.379 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:13:55.379 10:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.379 10:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.379 10:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.379 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:55.379 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:55.648 00:13:55.648 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:55.648 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.648 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.905 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.905 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.905 10:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.905 10:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.905 10:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.905 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.905 { 00:13:55.905 "cntlid": 15, 00:13:55.905 "qid": 0, 00:13:55.905 "state": "enabled", 00:13:55.905 "thread": "nvmf_tgt_poll_group_000", 00:13:55.905 "listen_address": { 00:13:55.905 "trtype": "TCP", 00:13:55.905 "adrfam": "IPv4", 00:13:55.905 "traddr": "10.0.0.2", 00:13:55.905 "trsvcid": "4420" 00:13:55.905 }, 00:13:55.905 "peer_address": { 00:13:55.905 "trtype": "TCP", 00:13:55.905 "adrfam": "IPv4", 00:13:55.905 "traddr": "10.0.0.1", 00:13:55.905 "trsvcid": "57970" 00:13:55.905 }, 00:13:55.905 "auth": { 00:13:55.905 "state": "completed", 00:13:55.905 "digest": "sha256", 00:13:55.905 "dhgroup": "ffdhe2048" 00:13:55.905 } 00:13:55.905 } 00:13:55.905 ]' 00:13:55.905 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.905 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.905 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.905 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:55.905 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.905 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.905 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.905 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.162 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:13:57.125 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.125 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:57.125 10:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.125 10:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.125 10:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.125 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:57.125 10:30:45 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:57.125 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:57.125 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:57.383 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:13:57.383 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:57.383 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:57.383 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:57.383 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:57.383 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.383 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.383 10:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.383 10:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.383 10:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.383 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.383 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.640 00:13:57.640 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:57.640 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:57.640 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.898 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.898 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.898 10:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.898 10:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.898 10:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.898 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.898 { 00:13:57.898 "cntlid": 17, 00:13:57.898 "qid": 0, 00:13:57.898 "state": "enabled", 00:13:57.898 "thread": "nvmf_tgt_poll_group_000", 00:13:57.898 "listen_address": { 00:13:57.898 "trtype": "TCP", 00:13:57.898 "adrfam": "IPv4", 00:13:57.898 "traddr": 
"10.0.0.2", 00:13:57.898 "trsvcid": "4420" 00:13:57.898 }, 00:13:57.898 "peer_address": { 00:13:57.898 "trtype": "TCP", 00:13:57.898 "adrfam": "IPv4", 00:13:57.898 "traddr": "10.0.0.1", 00:13:57.898 "trsvcid": "57990" 00:13:57.898 }, 00:13:57.898 "auth": { 00:13:57.898 "state": "completed", 00:13:57.898 "digest": "sha256", 00:13:57.898 "dhgroup": "ffdhe3072" 00:13:57.898 } 00:13:57.898 } 00:13:57.898 ]' 00:13:57.898 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.898 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:57.898 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:58.155 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:58.155 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:58.155 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.155 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.155 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.412 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:13:59.343 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.343 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:59.343 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.343 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.343 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.343 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:59.343 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:59.343 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:59.600 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:13:59.600 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:59.600 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:59.600 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:59.600 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:59.600 10:30:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.600 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.600 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.600 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.600 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.600 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.600 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.857 00:13:59.857 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.857 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.857 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.115 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.115 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.115 10:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.115 10:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.115 10:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.115 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.115 { 00:14:00.115 "cntlid": 19, 00:14:00.115 "qid": 0, 00:14:00.115 "state": "enabled", 00:14:00.115 "thread": "nvmf_tgt_poll_group_000", 00:14:00.115 "listen_address": { 00:14:00.115 "trtype": "TCP", 00:14:00.115 "adrfam": "IPv4", 00:14:00.115 "traddr": "10.0.0.2", 00:14:00.115 "trsvcid": "4420" 00:14:00.115 }, 00:14:00.115 "peer_address": { 00:14:00.115 "trtype": "TCP", 00:14:00.115 "adrfam": "IPv4", 00:14:00.115 "traddr": "10.0.0.1", 00:14:00.115 "trsvcid": "38702" 00:14:00.115 }, 00:14:00.115 "auth": { 00:14:00.115 "state": "completed", 00:14:00.115 "digest": "sha256", 00:14:00.115 "dhgroup": "ffdhe3072" 00:14:00.115 } 00:14:00.115 } 00:14:00.115 ]' 00:14:00.115 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.115 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:00.115 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.115 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:00.115 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.115 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.115 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.115 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.373 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:14:01.305 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.305 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:01.305 10:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.305 10:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.305 10:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.305 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.305 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:01.306 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:01.563 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:01.563 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.563 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:01.563 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:01.563 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:01.563 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.563 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.563 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.563 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.563 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.563 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.563 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.129 00:14:02.129 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.129 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.129 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.129 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.129 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.129 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.129 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.129 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.129 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.129 { 00:14:02.129 "cntlid": 21, 00:14:02.129 "qid": 0, 00:14:02.129 "state": "enabled", 00:14:02.129 "thread": "nvmf_tgt_poll_group_000", 00:14:02.129 "listen_address": { 00:14:02.129 "trtype": "TCP", 00:14:02.129 "adrfam": "IPv4", 00:14:02.129 "traddr": "10.0.0.2", 00:14:02.129 "trsvcid": "4420" 00:14:02.129 }, 00:14:02.129 "peer_address": { 00:14:02.129 "trtype": "TCP", 00:14:02.129 "adrfam": "IPv4", 00:14:02.129 "traddr": "10.0.0.1", 00:14:02.129 "trsvcid": "38730" 00:14:02.129 }, 00:14:02.129 "auth": { 00:14:02.129 "state": "completed", 00:14:02.129 "digest": "sha256", 00:14:02.129 "dhgroup": "ffdhe3072" 00:14:02.129 } 00:14:02.129 } 00:14:02.129 ]' 00:14:02.129 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.386 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.386 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.386 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:02.386 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.386 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.386 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.386 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.643 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:14:03.573 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
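For reference, each connect_authenticate round that repeats through this trace (one per dhgroup/key combination) condenses to the command sequence below. This is a sketch assembled only from the commands already visible in the log above; target-side RPCs go through rpc_cmd and host-side RPCs through rpc.py -s /var/tmp/host.sock, while $hostnqn and $hostid are shorthand for the nqn.2014-08.org.nvmexpress:uuid:29f67375-... host NQN and matching host ID used in this run, and the DHHC-1 secrets are the ones printed in the trace.

# configure the host-side initiator for the digest/dhgroup under test
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# allow the host on the target subsystem with the DH-HMAC-CHAP key pair under test
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

# attach from the host over TCP and confirm the controller came up
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expects nvme0

# verify the target negotiated the expected auth parameters on the new qpair
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'           # sha256 / ffdhe3072 / completed

# tear down the RPC-attached controller, then repeat the handshake through nvme-cli
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q $hostnqn --hostid $hostid \
    --dhchap-secret DHHC-1:...                                               # secret as printed in the trace above
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 $hostnqn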
00:14:03.573 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:03.573 10:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.573 10:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.573 10:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.573 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.573 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:03.573 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:03.831 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:03.831 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.831 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:03.831 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:03.831 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:03.831 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.831 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:03.831 10:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.831 10:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.831 10:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.831 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:03.831 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:04.088 00:14:04.088 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:04.088 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.088 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.344 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.344 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.344 10:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.344 10:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:04.344 10:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.344 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.344 { 00:14:04.344 "cntlid": 23, 00:14:04.344 "qid": 0, 00:14:04.344 "state": "enabled", 00:14:04.344 "thread": "nvmf_tgt_poll_group_000", 00:14:04.344 "listen_address": { 00:14:04.344 "trtype": "TCP", 00:14:04.344 "adrfam": "IPv4", 00:14:04.344 "traddr": "10.0.0.2", 00:14:04.344 "trsvcid": "4420" 00:14:04.344 }, 00:14:04.344 "peer_address": { 00:14:04.344 "trtype": "TCP", 00:14:04.344 "adrfam": "IPv4", 00:14:04.344 "traddr": "10.0.0.1", 00:14:04.344 "trsvcid": "38762" 00:14:04.344 }, 00:14:04.344 "auth": { 00:14:04.344 "state": "completed", 00:14:04.344 "digest": "sha256", 00:14:04.344 "dhgroup": "ffdhe3072" 00:14:04.344 } 00:14:04.344 } 00:14:04.344 ]' 00:14:04.345 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.345 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:04.345 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.345 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:04.345 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.602 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.602 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.602 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.860 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.790 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.356 00:14:06.356 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:06.356 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:06.356 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.613 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.613 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.613 10:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.613 10:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.613 10:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.613 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:06.613 { 00:14:06.613 "cntlid": 25, 00:14:06.613 "qid": 0, 00:14:06.613 "state": "enabled", 00:14:06.613 "thread": "nvmf_tgt_poll_group_000", 00:14:06.613 "listen_address": { 00:14:06.613 "trtype": "TCP", 00:14:06.613 "adrfam": "IPv4", 00:14:06.613 "traddr": "10.0.0.2", 00:14:06.613 "trsvcid": "4420" 00:14:06.613 }, 00:14:06.613 "peer_address": { 00:14:06.613 "trtype": "TCP", 00:14:06.613 "adrfam": "IPv4", 00:14:06.613 "traddr": "10.0.0.1", 00:14:06.613 "trsvcid": "38784" 00:14:06.613 }, 00:14:06.613 "auth": { 00:14:06.613 "state": "completed", 00:14:06.613 "digest": "sha256", 00:14:06.613 "dhgroup": "ffdhe4096" 00:14:06.613 } 00:14:06.613 } 00:14:06.613 ]' 00:14:06.613 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:06.613 10:30:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:06.613 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:06.613 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:06.613 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:06.613 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.613 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.613 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.871 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:14:07.804 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.804 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:07.804 10:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.804 10:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.804 10:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.804 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:07.804 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:07.804 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:08.062 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:08.062 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.062 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:08.062 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:08.062 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:08.062 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.062 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.062 10:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.062 10:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.062 10:30:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.062 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.062 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.627 00:14:08.627 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.627 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.627 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.627 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.627 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.627 10:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.627 10:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.884 10:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.884 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:08.884 { 00:14:08.884 "cntlid": 27, 00:14:08.884 "qid": 0, 00:14:08.884 "state": "enabled", 00:14:08.884 "thread": "nvmf_tgt_poll_group_000", 00:14:08.884 "listen_address": { 00:14:08.884 "trtype": "TCP", 00:14:08.884 "adrfam": "IPv4", 00:14:08.884 "traddr": "10.0.0.2", 00:14:08.884 "trsvcid": "4420" 00:14:08.884 }, 00:14:08.884 "peer_address": { 00:14:08.884 "trtype": "TCP", 00:14:08.884 "adrfam": "IPv4", 00:14:08.884 "traddr": "10.0.0.1", 00:14:08.884 "trsvcid": "56884" 00:14:08.884 }, 00:14:08.884 "auth": { 00:14:08.884 "state": "completed", 00:14:08.884 "digest": "sha256", 00:14:08.884 "dhgroup": "ffdhe4096" 00:14:08.884 } 00:14:08.884 } 00:14:08.884 ]' 00:14:08.884 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:08.884 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:08.884 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:08.884 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:08.884 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:08.884 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.884 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.885 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.142 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:14:10.074 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.074 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:10.074 10:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.074 10:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.074 10:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.074 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.074 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:10.074 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:10.331 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:10.331 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.331 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:10.331 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:10.331 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:10.331 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.331 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.331 10:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.331 10:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.331 10:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.331 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.331 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.588 00:14:10.588 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.588 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.588 10:30:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.846 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.846 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.846 10:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.846 10:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.846 10:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.846 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:10.846 { 00:14:10.846 "cntlid": 29, 00:14:10.846 "qid": 0, 00:14:10.846 "state": "enabled", 00:14:10.846 "thread": "nvmf_tgt_poll_group_000", 00:14:10.846 "listen_address": { 00:14:10.846 "trtype": "TCP", 00:14:10.846 "adrfam": "IPv4", 00:14:10.846 "traddr": "10.0.0.2", 00:14:10.846 "trsvcid": "4420" 00:14:10.846 }, 00:14:10.846 "peer_address": { 00:14:10.846 "trtype": "TCP", 00:14:10.846 "adrfam": "IPv4", 00:14:10.846 "traddr": "10.0.0.1", 00:14:10.846 "trsvcid": "56896" 00:14:10.846 }, 00:14:10.846 "auth": { 00:14:10.846 "state": "completed", 00:14:10.846 "digest": "sha256", 00:14:10.846 "dhgroup": "ffdhe4096" 00:14:10.846 } 00:14:10.846 } 00:14:10.846 ]' 00:14:10.846 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.103 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.103 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.103 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:11.104 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.104 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.104 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.104 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.361 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:14:12.293 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.293 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:12.293 10:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.293 10:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.293 10:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.293 10:31:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.293 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:12.293 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:12.552 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:12.552 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.552 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:12.552 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:12.552 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:12.552 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.552 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:12.552 10:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.552 10:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.552 10:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.552 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:12.552 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:12.809 00:14:12.809 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:12.809 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:12.809 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.067 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.067 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.067 10:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.067 10:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.067 10:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.067 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.067 { 00:14:13.067 "cntlid": 31, 00:14:13.067 "qid": 0, 00:14:13.067 "state": "enabled", 00:14:13.067 "thread": "nvmf_tgt_poll_group_000", 00:14:13.067 "listen_address": { 00:14:13.067 "trtype": "TCP", 00:14:13.067 "adrfam": "IPv4", 00:14:13.067 "traddr": "10.0.0.2", 00:14:13.067 "trsvcid": "4420" 00:14:13.067 }, 
00:14:13.067 "peer_address": { 00:14:13.067 "trtype": "TCP", 00:14:13.067 "adrfam": "IPv4", 00:14:13.067 "traddr": "10.0.0.1", 00:14:13.067 "trsvcid": "56908" 00:14:13.067 }, 00:14:13.067 "auth": { 00:14:13.067 "state": "completed", 00:14:13.067 "digest": "sha256", 00:14:13.067 "dhgroup": "ffdhe4096" 00:14:13.067 } 00:14:13.067 } 00:14:13.067 ]' 00:14:13.067 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.067 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.067 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.324 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:13.324 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.324 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.324 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.324 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.581 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:14:14.513 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.513 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:14.513 10:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.513 10:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.513 10:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.513 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:14.513 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.513 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:14.513 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:14.770 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:14.770 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.770 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:14.770 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:14.770 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:14.770 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:14.770 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.770 10:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.770 10:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.770 10:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.770 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.770 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.387 00:14:15.387 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.387 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.387 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.387 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.387 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.387 10:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.387 10:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.387 10:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.387 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.387 { 00:14:15.387 "cntlid": 33, 00:14:15.387 "qid": 0, 00:14:15.387 "state": "enabled", 00:14:15.387 "thread": "nvmf_tgt_poll_group_000", 00:14:15.387 "listen_address": { 00:14:15.387 "trtype": "TCP", 00:14:15.387 "adrfam": "IPv4", 00:14:15.387 "traddr": "10.0.0.2", 00:14:15.387 "trsvcid": "4420" 00:14:15.387 }, 00:14:15.387 "peer_address": { 00:14:15.387 "trtype": "TCP", 00:14:15.387 "adrfam": "IPv4", 00:14:15.387 "traddr": "10.0.0.1", 00:14:15.387 "trsvcid": "56946" 00:14:15.387 }, 00:14:15.387 "auth": { 00:14:15.387 "state": "completed", 00:14:15.387 "digest": "sha256", 00:14:15.387 "dhgroup": "ffdhe6144" 00:14:15.387 } 00:14:15.387 } 00:14:15.387 ]' 00:14:15.387 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.665 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.665 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.665 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:15.665 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.665 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.665 10:31:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.665 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.665 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:14:16.594 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.594 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:16.594 10:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.594 10:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.594 10:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.594 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.594 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:16.594 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:16.851 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:16.851 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.851 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:16.851 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:16.851 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:16.851 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.851 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.851 10:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.851 10:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.851 10:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.851 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.851 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.414 00:14:17.414 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.414 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.414 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.670 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.670 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.670 10:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.670 10:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.670 10:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.670 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.670 { 00:14:17.670 "cntlid": 35, 00:14:17.670 "qid": 0, 00:14:17.670 "state": "enabled", 00:14:17.670 "thread": "nvmf_tgt_poll_group_000", 00:14:17.670 "listen_address": { 00:14:17.670 "trtype": "TCP", 00:14:17.670 "adrfam": "IPv4", 00:14:17.670 "traddr": "10.0.0.2", 00:14:17.670 "trsvcid": "4420" 00:14:17.670 }, 00:14:17.670 "peer_address": { 00:14:17.670 "trtype": "TCP", 00:14:17.670 "adrfam": "IPv4", 00:14:17.670 "traddr": "10.0.0.1", 00:14:17.670 "trsvcid": "56966" 00:14:17.670 }, 00:14:17.670 "auth": { 00:14:17.670 "state": "completed", 00:14:17.670 "digest": "sha256", 00:14:17.670 "dhgroup": "ffdhe6144" 00:14:17.670 } 00:14:17.670 } 00:14:17.670 ]' 00:14:17.670 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.670 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.670 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.927 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:17.927 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:17.927 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.927 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.927 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.183 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:14:19.113 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.113 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:19.113 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.113 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.113 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.113 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.113 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:19.113 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:19.371 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:19.371 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.371 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:19.371 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:19.371 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:19.371 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.371 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.371 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.371 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.371 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.371 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.371 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.936 00:14:19.936 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.936 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:19.936 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.194 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.194 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.194 10:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.194 10:31:08 nvmf_tcp.nvmf_auth_target -- 
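Two SPDK RPC endpoints are interleaved in the records above: rpc_cmd is the test framework's wrapper for the nvmf target's RPC socket (subsystem and host configuration), while hostrpc drives a second SPDK application playing the host role over /var/tmp/host.sock. A rough equivalent of the host-side wrapper, assuming rootdir points at the SPDK checkout visible in the paths above (a sketch, not the script's exact definition):

hostrpc() {
    # host-side SPDK app; socket path copied from the trace
    "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
}
# target side, e.g.:  rpc_cmd nvmf_subsystem_remove_host <subnqn> <hostnqn>
# host side, e.g.:    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144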
common/autotest_common.sh@10 -- # set +x 00:14:20.194 10:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.194 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.194 { 00:14:20.194 "cntlid": 37, 00:14:20.194 "qid": 0, 00:14:20.194 "state": "enabled", 00:14:20.194 "thread": "nvmf_tgt_poll_group_000", 00:14:20.194 "listen_address": { 00:14:20.194 "trtype": "TCP", 00:14:20.194 "adrfam": "IPv4", 00:14:20.194 "traddr": "10.0.0.2", 00:14:20.194 "trsvcid": "4420" 00:14:20.194 }, 00:14:20.194 "peer_address": { 00:14:20.194 "trtype": "TCP", 00:14:20.194 "adrfam": "IPv4", 00:14:20.194 "traddr": "10.0.0.1", 00:14:20.194 "trsvcid": "41892" 00:14:20.194 }, 00:14:20.194 "auth": { 00:14:20.194 "state": "completed", 00:14:20.194 "digest": "sha256", 00:14:20.194 "dhgroup": "ffdhe6144" 00:14:20.194 } 00:14:20.194 } 00:14:20.194 ]' 00:14:20.194 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.194 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.194 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.194 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:20.194 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.194 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.194 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.194 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.451 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:14:21.382 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.382 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:21.382 10:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.382 10:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.382 10:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.382 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.382 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:21.383 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:21.640 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:14:21.640 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.640 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:21.640 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:21.640 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:21.640 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.640 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:21.640 10:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.640 10:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.640 10:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.640 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:21.640 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:22.206 00:14:22.206 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.206 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.206 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.463 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.463 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.463 10:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.463 10:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.463 10:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.463 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.463 { 00:14:22.463 "cntlid": 39, 00:14:22.463 "qid": 0, 00:14:22.463 "state": "enabled", 00:14:22.463 "thread": "nvmf_tgt_poll_group_000", 00:14:22.463 "listen_address": { 00:14:22.463 "trtype": "TCP", 00:14:22.463 "adrfam": "IPv4", 00:14:22.463 "traddr": "10.0.0.2", 00:14:22.463 "trsvcid": "4420" 00:14:22.463 }, 00:14:22.463 "peer_address": { 00:14:22.463 "trtype": "TCP", 00:14:22.463 "adrfam": "IPv4", 00:14:22.463 "traddr": "10.0.0.1", 00:14:22.463 "trsvcid": "41924" 00:14:22.463 }, 00:14:22.463 "auth": { 00:14:22.463 "state": "completed", 00:14:22.463 "digest": "sha256", 00:14:22.463 "dhgroup": "ffdhe6144" 00:14:22.463 } 00:14:22.463 } 00:14:22.463 ]' 00:14:22.463 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.463 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.463 10:31:10 
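The qpair listing above is how each handshake is verified: nvmf_subsystem_get_qpairs reports, per connection, which digest and DH group were negotiated and whether authentication reached the completed state. A minimal sketch of that check, assuming rpc_cmd is the framework's target-side RPC wrapper and using the subsystem NQN from the trace:

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha256 ]]      # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]  # negotiated FFDHE group
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]    # DH-HMAC-CHAP finished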
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.463 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:22.463 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.463 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.463 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.463 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.721 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:14:23.652 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.652 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:23.652 10:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.652 10:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.652 10:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.652 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:23.652 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.652 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:23.652 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:23.909 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:23.909 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.909 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:23.909 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:23.909 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:23.909 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.909 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.909 10:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.909 10:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.909 10:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.909 10:31:12 
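Each keyid pass in this trace has the same shape: register the host on the subsystem with its DH-CHAP key(s), attach a controller from the SPDK host app with the matching keys, confirm the negotiated parameters on a live qpair, then tear the path down. A sketch of that per-pass flow reconstructed from the target/auth.sh markers above (subnqn, hostnqn and the keyN/ckeyN names stand in for the values shown in the records; the nvme-cli connect/disconnect and the nvmf_subsystem_remove_host teardown that follow each pass are omitted):

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # add --dhchap-ctrlr-key only when a controller key exists for this index
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" "${ckey[@]}"
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    local qpairs
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0
}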
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.910 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.842 00:14:24.842 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.842 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.842 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.099 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.099 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.099 10:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.099 10:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.099 10:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.099 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:25.099 { 00:14:25.099 "cntlid": 41, 00:14:25.099 "qid": 0, 00:14:25.099 "state": "enabled", 00:14:25.099 "thread": "nvmf_tgt_poll_group_000", 00:14:25.099 "listen_address": { 00:14:25.099 "trtype": "TCP", 00:14:25.099 "adrfam": "IPv4", 00:14:25.099 "traddr": "10.0.0.2", 00:14:25.099 "trsvcid": "4420" 00:14:25.099 }, 00:14:25.099 "peer_address": { 00:14:25.099 "trtype": "TCP", 00:14:25.099 "adrfam": "IPv4", 00:14:25.099 "traddr": "10.0.0.1", 00:14:25.099 "trsvcid": "41960" 00:14:25.099 }, 00:14:25.099 "auth": { 00:14:25.099 "state": "completed", 00:14:25.099 "digest": "sha256", 00:14:25.099 "dhgroup": "ffdhe8192" 00:14:25.099 } 00:14:25.099 } 00:14:25.099 ]' 00:14:25.099 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:25.099 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.099 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:25.099 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:25.099 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:25.099 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.099 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.099 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.355 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:14:26.286 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.286 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:26.286 10:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.286 10:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.286 10:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.286 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.286 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:26.286 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:26.544 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:26.544 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.544 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.544 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:26.544 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:26.544 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.544 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.544 10:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.544 10:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.544 10:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.544 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.544 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.477 00:14:27.477 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:27.477 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:27.477 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.735 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.735 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.735 10:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.735 10:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.735 10:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.735 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.735 { 00:14:27.735 "cntlid": 43, 00:14:27.735 "qid": 0, 00:14:27.735 "state": "enabled", 00:14:27.735 "thread": "nvmf_tgt_poll_group_000", 00:14:27.735 "listen_address": { 00:14:27.735 "trtype": "TCP", 00:14:27.735 "adrfam": "IPv4", 00:14:27.735 "traddr": "10.0.0.2", 00:14:27.735 "trsvcid": "4420" 00:14:27.735 }, 00:14:27.735 "peer_address": { 00:14:27.735 "trtype": "TCP", 00:14:27.735 "adrfam": "IPv4", 00:14:27.735 "traddr": "10.0.0.1", 00:14:27.735 "trsvcid": "41984" 00:14:27.735 }, 00:14:27.735 "auth": { 00:14:27.735 "state": "completed", 00:14:27.735 "digest": "sha256", 00:14:27.735 "dhgroup": "ffdhe8192" 00:14:27.735 } 00:14:27.735 } 00:14:27.735 ]' 00:14:27.735 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:27.735 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.735 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.735 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:27.736 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.736 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.736 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.736 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.993 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:14:28.927 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.927 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:28.927 10:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.927 10:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.927 10:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.927 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:14:28.927 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:28.927 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:29.184 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:29.184 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.184 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:29.184 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:29.184 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:29.184 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.184 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.184 10:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.184 10:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.184 10:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.185 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.185 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.118 00:14:30.118 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.118 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.118 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.376 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.376 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.376 10:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.376 10:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.376 10:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.376 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.376 { 00:14:30.376 "cntlid": 45, 00:14:30.376 "qid": 0, 00:14:30.376 "state": "enabled", 00:14:30.376 "thread": "nvmf_tgt_poll_group_000", 00:14:30.376 "listen_address": { 00:14:30.376 "trtype": "TCP", 00:14:30.376 "adrfam": "IPv4", 00:14:30.376 "traddr": "10.0.0.2", 00:14:30.376 "trsvcid": "4420" 
00:14:30.376 }, 00:14:30.376 "peer_address": { 00:14:30.376 "trtype": "TCP", 00:14:30.376 "adrfam": "IPv4", 00:14:30.376 "traddr": "10.0.0.1", 00:14:30.376 "trsvcid": "51164" 00:14:30.376 }, 00:14:30.376 "auth": { 00:14:30.376 "state": "completed", 00:14:30.376 "digest": "sha256", 00:14:30.376 "dhgroup": "ffdhe8192" 00:14:30.376 } 00:14:30.376 } 00:14:30.376 ]' 00:14:30.376 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.376 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.376 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.376 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:30.376 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.376 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.376 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.376 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.633 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:14:31.567 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.567 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:31.567 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.567 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.567 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.567 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.567 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:31.567 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:31.823 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:31.823 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.823 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:31.823 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:31.823 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:31.823 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.823 10:31:20 
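The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) record above is the switch between bidirectional and host-only authentication: the option pair is emitted only when a controller key is defined for that index. In this run index 3 carries no ckey, which is why the nvmf_subsystem_add_host call that follows passes just --dhchap-key key3 and the later nvme connect supplies a single DHHC-1 secret, i.e. the target authenticates the host but is not challenged back. A tiny illustration of the expansion (the real ckeys array is populated earlier in the run, outside this excerpt):

ckeys=([0]=set [1]=set [2]=set [3]=)   # index 3 left empty, as in this run
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${#ckey[@]}"                     # prints 0: no controller-key option is added for key3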
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:31.823 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.823 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.823 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.823 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:31.823 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.751 00:14:32.751 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.751 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.751 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.008 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.008 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.008 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.008 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.008 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.008 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.008 { 00:14:33.008 "cntlid": 47, 00:14:33.008 "qid": 0, 00:14:33.008 "state": "enabled", 00:14:33.008 "thread": "nvmf_tgt_poll_group_000", 00:14:33.008 "listen_address": { 00:14:33.008 "trtype": "TCP", 00:14:33.008 "adrfam": "IPv4", 00:14:33.008 "traddr": "10.0.0.2", 00:14:33.008 "trsvcid": "4420" 00:14:33.008 }, 00:14:33.008 "peer_address": { 00:14:33.008 "trtype": "TCP", 00:14:33.008 "adrfam": "IPv4", 00:14:33.008 "traddr": "10.0.0.1", 00:14:33.008 "trsvcid": "51172" 00:14:33.008 }, 00:14:33.008 "auth": { 00:14:33.008 "state": "completed", 00:14:33.008 "digest": "sha256", 00:14:33.008 "dhgroup": "ffdhe8192" 00:14:33.008 } 00:14:33.008 } 00:14:33.008 ]' 00:14:33.008 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.008 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.008 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.008 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:33.008 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.008 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.008 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.008 
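The "for digest" / "for dhgroup" / "for keyid" markers show how the matrix is walked: digest outermost, DH group next, key index innermost, with the host's allowed algorithms narrowed to a single choice before every pass so the negotiated values end up being exactly the ones under test. Reconstructed loop shape (the digests, dhgroups and keys arrays are defined earlier in the script and are assumptions here; only sha256/sha384 and ffdhe6144/ffdhe8192/null/ffdhe2048 are visible in this excerpt):

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # restrict the host side to exactly one digest and one DH group
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done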
10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.266 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:14:34.197 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.197 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:34.197 10:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.197 10:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.197 10:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.197 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:34.197 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:34.197 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.197 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:34.197 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:34.454 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:34.454 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.454 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:34.454 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:34.454 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:34.454 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.454 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.454 10:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.454 10:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.454 10:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.454 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.454 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.711 00:14:34.711 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.711 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.711 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.969 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.969 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.969 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.969 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.969 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.969 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.969 { 00:14:34.969 "cntlid": 49, 00:14:34.969 "qid": 0, 00:14:34.969 "state": "enabled", 00:14:34.969 "thread": "nvmf_tgt_poll_group_000", 00:14:34.969 "listen_address": { 00:14:34.969 "trtype": "TCP", 00:14:34.969 "adrfam": "IPv4", 00:14:34.969 "traddr": "10.0.0.2", 00:14:34.969 "trsvcid": "4420" 00:14:34.969 }, 00:14:34.969 "peer_address": { 00:14:34.969 "trtype": "TCP", 00:14:34.969 "adrfam": "IPv4", 00:14:34.969 "traddr": "10.0.0.1", 00:14:34.969 "trsvcid": "51198" 00:14:34.969 }, 00:14:34.969 "auth": { 00:14:34.969 "state": "completed", 00:14:34.969 "digest": "sha384", 00:14:34.969 "dhgroup": "null" 00:14:34.969 } 00:14:34.969 } 00:14:34.969 ]' 00:14:34.969 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.969 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:34.969 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.226 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:35.226 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.226 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.226 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.226 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.484 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.416 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.981 00:14:36.981 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.981 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.981 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.981 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.981 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.981 10:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.981 10:31:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:36.981 10:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.981 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.981 { 00:14:36.981 "cntlid": 51, 00:14:36.981 "qid": 0, 00:14:36.981 "state": "enabled", 00:14:36.981 "thread": "nvmf_tgt_poll_group_000", 00:14:36.981 "listen_address": { 00:14:36.981 "trtype": "TCP", 00:14:36.981 "adrfam": "IPv4", 00:14:36.981 "traddr": "10.0.0.2", 00:14:36.981 "trsvcid": "4420" 00:14:36.981 }, 00:14:36.981 "peer_address": { 00:14:36.981 "trtype": "TCP", 00:14:36.981 "adrfam": "IPv4", 00:14:36.981 "traddr": "10.0.0.1", 00:14:36.981 "trsvcid": "51212" 00:14:36.981 }, 00:14:36.981 "auth": { 00:14:36.981 "state": "completed", 00:14:36.981 "digest": "sha384", 00:14:36.981 "dhgroup": "null" 00:14:36.981 } 00:14:36.981 } 00:14:36.981 ]' 00:14:36.981 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.238 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:37.238 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.238 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:37.238 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.238 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.238 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.238 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.495 10:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:14:38.432 10:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.433 10:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:38.433 10:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.433 10:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.433 10:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.433 10:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.433 10:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:38.433 10:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:38.738 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:38.738 10:31:27 
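The --dhchap-secret / --dhchap-ctrl-secret values handed to nvme connect use the qualified DH-HMAC-CHAP secret representation, DHHC-1:<id>:<base64>:. By the usual convention (not something this log itself proves) the middle field identifies the hash tied to the secret, 00 for none and 01/02/03 for SHA-256/384/512, and the base64 payload is the key material followed by a 4-byte check value, which matches the lengths seen in these records. A quick length check against one of the secrets above:

secret='DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ:'
cut -d: -f3 <<< "$secret" | base64 -d | wc -c   # 36 bytes: 32 of key material plus a 4-byte check value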
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.738 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:38.738 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:38.738 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:38.738 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.738 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.738 10:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.738 10:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.738 10:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.738 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.738 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.019 00:14:39.019 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.020 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.020 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.020 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.020 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.020 10:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.020 10:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.275 10:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.275 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.275 { 00:14:39.275 "cntlid": 53, 00:14:39.275 "qid": 0, 00:14:39.275 "state": "enabled", 00:14:39.275 "thread": "nvmf_tgt_poll_group_000", 00:14:39.275 "listen_address": { 00:14:39.275 "trtype": "TCP", 00:14:39.275 "adrfam": "IPv4", 00:14:39.275 "traddr": "10.0.0.2", 00:14:39.275 "trsvcid": "4420" 00:14:39.275 }, 00:14:39.275 "peer_address": { 00:14:39.275 "trtype": "TCP", 00:14:39.275 "adrfam": "IPv4", 00:14:39.275 "traddr": "10.0.0.1", 00:14:39.275 "trsvcid": "51662" 00:14:39.275 }, 00:14:39.275 "auth": { 00:14:39.275 "state": "completed", 00:14:39.275 "digest": "sha384", 00:14:39.275 "dhgroup": "null" 00:14:39.275 } 00:14:39.275 } 00:14:39.275 ]' 00:14:39.275 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.275 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:14:39.275 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.275 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:39.275 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.275 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.275 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.275 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.531 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:14:40.461 10:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.461 10:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:40.461 10:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.461 10:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.461 10:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.461 10:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.461 10:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:40.461 10:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:40.718 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:40.718 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.718 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:40.718 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:40.718 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:40.718 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.718 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:40.718 10:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.718 10:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.718 10:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.718 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:40.718 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:40.975 00:14:40.975 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.975 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.975 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.231 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.231 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.231 10:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.231 10:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.231 10:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.231 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.231 { 00:14:41.231 "cntlid": 55, 00:14:41.231 "qid": 0, 00:14:41.231 "state": "enabled", 00:14:41.231 "thread": "nvmf_tgt_poll_group_000", 00:14:41.231 "listen_address": { 00:14:41.231 "trtype": "TCP", 00:14:41.231 "adrfam": "IPv4", 00:14:41.231 "traddr": "10.0.0.2", 00:14:41.231 "trsvcid": "4420" 00:14:41.231 }, 00:14:41.231 "peer_address": { 00:14:41.231 "trtype": "TCP", 00:14:41.231 "adrfam": "IPv4", 00:14:41.231 "traddr": "10.0.0.1", 00:14:41.231 "trsvcid": "51696" 00:14:41.231 }, 00:14:41.231 "auth": { 00:14:41.231 "state": "completed", 00:14:41.231 "digest": "sha384", 00:14:41.231 "dhgroup": "null" 00:14:41.231 } 00:14:41.231 } 00:14:41.231 ]' 00:14:41.231 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.231 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:41.231 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.231 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:41.231 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.231 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.231 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.231 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.487 10:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:14:42.419 10:31:30 
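The two initiator paths in these records name their credentials differently: bdev_nvme_attach_controller takes --dhchap-key key3 / --dhchap-ctrlr-key ckey3, which refer to key objects registered with the host-side SPDK app earlier in the run (that registration is outside this excerpt), whereas nvme connect is given the literal DHHC-1 secret strings. Side by side, with placeholder variables standing in for the NQNs, host ID and secret shown in full above:

# SPDK host app: keys referenced by name
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key3
hostrpc bdev_nvme_detach_controller nvme0
# kernel initiator: the secret itself goes on the command line
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$key3_secret"
nvme disconnect -n "$subnqn"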
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.419 10:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:42.419 10:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.419 10:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.419 10:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.419 10:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:42.419 10:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.419 10:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:42.419 10:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:42.677 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:42.677 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.677 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:42.677 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:42.677 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:42.677 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.677 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.677 10:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.677 10:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.677 10:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.677 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.677 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.934 00:14:42.934 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.934 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.935 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.192 10:31:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.192 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.192 10:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.192 10:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.192 10:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.192 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.192 { 00:14:43.192 "cntlid": 57, 00:14:43.193 "qid": 0, 00:14:43.193 "state": "enabled", 00:14:43.193 "thread": "nvmf_tgt_poll_group_000", 00:14:43.193 "listen_address": { 00:14:43.193 "trtype": "TCP", 00:14:43.193 "adrfam": "IPv4", 00:14:43.193 "traddr": "10.0.0.2", 00:14:43.193 "trsvcid": "4420" 00:14:43.193 }, 00:14:43.193 "peer_address": { 00:14:43.193 "trtype": "TCP", 00:14:43.193 "adrfam": "IPv4", 00:14:43.193 "traddr": "10.0.0.1", 00:14:43.193 "trsvcid": "51722" 00:14:43.193 }, 00:14:43.193 "auth": { 00:14:43.193 "state": "completed", 00:14:43.193 "digest": "sha384", 00:14:43.193 "dhgroup": "ffdhe2048" 00:14:43.193 } 00:14:43.193 } 00:14:43.193 ]' 00:14:43.193 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.450 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:43.450 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.450 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:43.450 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.450 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.450 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.450 10:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.708 10:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:14:44.639 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.640 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:44.640 10:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.640 10:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.640 10:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.640 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.640 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:44.640 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:44.896 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:44.896 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.896 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:44.896 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:44.896 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:44.896 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.896 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.896 10:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.896 10:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.896 10:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.896 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.896 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.154 00:14:45.154 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.154 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.154 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.411 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.411 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.411 10:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.411 10:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.411 10:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.411 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.411 { 00:14:45.411 "cntlid": 59, 00:14:45.411 "qid": 0, 00:14:45.411 "state": "enabled", 00:14:45.411 "thread": "nvmf_tgt_poll_group_000", 00:14:45.411 "listen_address": { 00:14:45.411 "trtype": "TCP", 00:14:45.411 "adrfam": "IPv4", 00:14:45.411 "traddr": "10.0.0.2", 00:14:45.411 "trsvcid": "4420" 00:14:45.411 }, 00:14:45.411 "peer_address": { 00:14:45.411 "trtype": "TCP", 00:14:45.411 "adrfam": "IPv4", 00:14:45.411 
"traddr": "10.0.0.1", 00:14:45.411 "trsvcid": "51752" 00:14:45.411 }, 00:14:45.411 "auth": { 00:14:45.411 "state": "completed", 00:14:45.411 "digest": "sha384", 00:14:45.411 "dhgroup": "ffdhe2048" 00:14:45.411 } 00:14:45.411 } 00:14:45.411 ]' 00:14:45.411 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.669 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:45.669 10:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.669 10:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:45.669 10:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.669 10:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.669 10:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.669 10:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.926 10:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:14:46.859 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.859 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:46.859 10:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.859 10:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.859 10:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.859 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.859 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:46.859 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:47.117 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:14:47.117 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.117 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:47.117 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:47.117 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:47.117 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.117 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.117 10:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.117 10:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.117 10:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.117 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.117 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.374 00:14:47.374 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.374 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.374 10:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.632 10:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.632 10:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.632 10:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.632 10:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.632 10:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.632 10:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.632 { 00:14:47.632 "cntlid": 61, 00:14:47.632 "qid": 0, 00:14:47.632 "state": "enabled", 00:14:47.632 "thread": "nvmf_tgt_poll_group_000", 00:14:47.632 "listen_address": { 00:14:47.632 "trtype": "TCP", 00:14:47.632 "adrfam": "IPv4", 00:14:47.632 "traddr": "10.0.0.2", 00:14:47.632 "trsvcid": "4420" 00:14:47.632 }, 00:14:47.632 "peer_address": { 00:14:47.632 "trtype": "TCP", 00:14:47.632 "adrfam": "IPv4", 00:14:47.632 "traddr": "10.0.0.1", 00:14:47.632 "trsvcid": "51774" 00:14:47.632 }, 00:14:47.632 "auth": { 00:14:47.632 "state": "completed", 00:14:47.632 "digest": "sha384", 00:14:47.632 "dhgroup": "ffdhe2048" 00:14:47.632 } 00:14:47.632 } 00:14:47.632 ]' 00:14:47.632 10:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.632 10:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:47.632 10:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.632 10:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:47.632 10:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.632 10:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.632 10:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.632 10:31:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.890 10:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:14:48.823 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.823 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:48.823 10:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.823 10:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.823 10:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.823 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.823 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:48.823 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:49.081 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:14:49.081 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.081 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:49.081 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:49.081 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:49.081 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.081 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:49.081 10:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.081 10:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.081 10:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.081 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.081 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.338 00:14:49.338 10:31:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.338 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.338 10:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.596 10:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.596 10:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.596 10:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.596 10:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.596 10:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.596 10:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.596 { 00:14:49.596 "cntlid": 63, 00:14:49.596 "qid": 0, 00:14:49.596 "state": "enabled", 00:14:49.596 "thread": "nvmf_tgt_poll_group_000", 00:14:49.596 "listen_address": { 00:14:49.596 "trtype": "TCP", 00:14:49.596 "adrfam": "IPv4", 00:14:49.596 "traddr": "10.0.0.2", 00:14:49.596 "trsvcid": "4420" 00:14:49.596 }, 00:14:49.596 "peer_address": { 00:14:49.596 "trtype": "TCP", 00:14:49.596 "adrfam": "IPv4", 00:14:49.596 "traddr": "10.0.0.1", 00:14:49.596 "trsvcid": "42558" 00:14:49.596 }, 00:14:49.596 "auth": { 00:14:49.596 "state": "completed", 00:14:49.596 "digest": "sha384", 00:14:49.596 "dhgroup": "ffdhe2048" 00:14:49.596 } 00:14:49.596 } 00:14:49.596 ]' 00:14:49.596 10:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.596 10:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:49.596 10:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.854 10:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:49.854 10:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.854 10:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.854 10:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.854 10:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.113 10:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:14:51.046 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.046 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:51.046 10:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.046 10:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:51.046 10:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.046 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.046 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.046 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:51.046 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:51.304 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:14:51.304 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.304 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:51.304 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:51.304 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:51.304 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.304 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.304 10:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.304 10:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.304 10:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.304 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.304 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.562 00:14:51.562 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.562 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.562 10:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.819 10:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.819 10:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.819 10:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.819 10:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.819 10:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.819 10:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.819 { 
00:14:51.819 "cntlid": 65, 00:14:51.819 "qid": 0, 00:14:51.819 "state": "enabled", 00:14:51.819 "thread": "nvmf_tgt_poll_group_000", 00:14:51.819 "listen_address": { 00:14:51.819 "trtype": "TCP", 00:14:51.819 "adrfam": "IPv4", 00:14:51.819 "traddr": "10.0.0.2", 00:14:51.819 "trsvcid": "4420" 00:14:51.819 }, 00:14:51.819 "peer_address": { 00:14:51.819 "trtype": "TCP", 00:14:51.819 "adrfam": "IPv4", 00:14:51.819 "traddr": "10.0.0.1", 00:14:51.819 "trsvcid": "42598" 00:14:51.819 }, 00:14:51.819 "auth": { 00:14:51.819 "state": "completed", 00:14:51.819 "digest": "sha384", 00:14:51.819 "dhgroup": "ffdhe3072" 00:14:51.819 } 00:14:51.819 } 00:14:51.819 ]' 00:14:51.819 10:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.819 10:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.819 10:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.819 10:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:51.819 10:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.076 10:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.076 10:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.076 10:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.333 10:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.267 10:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.832 00:14:53.832 10:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.832 10:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.832 10:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.832 10:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.832 10:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.832 10:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.832 10:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.832 10:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.832 10:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.832 { 00:14:53.832 "cntlid": 67, 00:14:53.832 "qid": 0, 00:14:53.832 "state": "enabled", 00:14:53.832 "thread": "nvmf_tgt_poll_group_000", 00:14:53.832 "listen_address": { 00:14:53.832 "trtype": "TCP", 00:14:53.832 "adrfam": "IPv4", 00:14:53.832 "traddr": "10.0.0.2", 00:14:53.832 "trsvcid": "4420" 00:14:53.832 }, 00:14:53.832 "peer_address": { 00:14:53.832 "trtype": "TCP", 00:14:53.832 "adrfam": "IPv4", 00:14:53.832 "traddr": "10.0.0.1", 00:14:53.832 "trsvcid": "42608" 00:14:53.832 }, 00:14:53.832 "auth": { 00:14:53.832 "state": "completed", 00:14:53.832 "digest": "sha384", 00:14:53.832 "dhgroup": "ffdhe3072" 00:14:53.832 } 00:14:53.832 } 00:14:53.832 ]' 00:14:53.832 10:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.089 10:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:54.089 10:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.089 10:31:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:54.089 10:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.090 10:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.090 10:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.090 10:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.347 10:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:14:55.276 10:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.276 10:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:55.276 10:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.276 10:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.276 10:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.276 10:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.276 10:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:55.276 10:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:55.533 10:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:14:55.533 10:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.533 10:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:55.533 10:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:55.533 10:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:55.533 10:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.533 10:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.533 10:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.533 10:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.533 10:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.533 10:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.533 10:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.790 00:14:55.790 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.790 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.790 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.048 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.048 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.048 10:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.048 10:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.048 10:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.048 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.048 { 00:14:56.048 "cntlid": 69, 00:14:56.048 "qid": 0, 00:14:56.048 "state": "enabled", 00:14:56.048 "thread": "nvmf_tgt_poll_group_000", 00:14:56.048 "listen_address": { 00:14:56.048 "trtype": "TCP", 00:14:56.048 "adrfam": "IPv4", 00:14:56.048 "traddr": "10.0.0.2", 00:14:56.048 "trsvcid": "4420" 00:14:56.048 }, 00:14:56.048 "peer_address": { 00:14:56.048 "trtype": "TCP", 00:14:56.048 "adrfam": "IPv4", 00:14:56.048 "traddr": "10.0.0.1", 00:14:56.048 "trsvcid": "42626" 00:14:56.048 }, 00:14:56.048 "auth": { 00:14:56.048 "state": "completed", 00:14:56.048 "digest": "sha384", 00:14:56.048 "dhgroup": "ffdhe3072" 00:14:56.048 } 00:14:56.048 } 00:14:56.048 ]' 00:14:56.048 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.048 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.048 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.048 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:56.048 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.306 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.306 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.306 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.564 10:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret 
DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:57.495 10:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:58.060 00:14:58.060 10:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.060 10:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.060 10:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.060 10:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.060 10:31:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.060 10:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.060 10:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.060 10:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.060 10:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.060 { 00:14:58.060 "cntlid": 71, 00:14:58.060 "qid": 0, 00:14:58.060 "state": "enabled", 00:14:58.060 "thread": "nvmf_tgt_poll_group_000", 00:14:58.060 "listen_address": { 00:14:58.060 "trtype": "TCP", 00:14:58.060 "adrfam": "IPv4", 00:14:58.060 "traddr": "10.0.0.2", 00:14:58.060 "trsvcid": "4420" 00:14:58.060 }, 00:14:58.060 "peer_address": { 00:14:58.060 "trtype": "TCP", 00:14:58.060 "adrfam": "IPv4", 00:14:58.060 "traddr": "10.0.0.1", 00:14:58.060 "trsvcid": "42666" 00:14:58.060 }, 00:14:58.060 "auth": { 00:14:58.060 "state": "completed", 00:14:58.060 "digest": "sha384", 00:14:58.060 "dhgroup": "ffdhe3072" 00:14:58.060 } 00:14:58.060 } 00:14:58.060 ]' 00:14:58.060 10:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.316 10:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.316 10:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.316 10:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:58.316 10:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.316 10:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.316 10:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.316 10:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.573 10:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:14:59.506 10:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.506 10:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:59.506 10:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.506 10:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.506 10:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.506 10:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:59.506 10:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.506 10:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:59.506 10:31:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:59.763 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:14:59.763 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.763 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:59.763 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:59.763 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:59.763 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.763 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.763 10:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.763 10:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.763 10:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.763 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.763 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.020 00:15:00.021 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.021 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.021 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.277 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.277 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.277 10:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.277 10:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.277 10:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.277 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.277 { 00:15:00.277 "cntlid": 73, 00:15:00.277 "qid": 0, 00:15:00.277 "state": "enabled", 00:15:00.277 "thread": "nvmf_tgt_poll_group_000", 00:15:00.277 "listen_address": { 00:15:00.277 "trtype": "TCP", 00:15:00.277 "adrfam": "IPv4", 00:15:00.277 "traddr": "10.0.0.2", 00:15:00.277 "trsvcid": "4420" 00:15:00.277 }, 00:15:00.277 "peer_address": { 00:15:00.277 "trtype": "TCP", 00:15:00.277 "adrfam": "IPv4", 00:15:00.277 "traddr": "10.0.0.1", 00:15:00.277 "trsvcid": "60056" 00:15:00.277 }, 00:15:00.277 "auth": { 00:15:00.277 
"state": "completed", 00:15:00.277 "digest": "sha384", 00:15:00.277 "dhgroup": "ffdhe4096" 00:15:00.277 } 00:15:00.277 } 00:15:00.277 ]' 00:15:00.277 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.534 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.534 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.534 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:00.534 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.534 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.534 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.534 10:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.792 10:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.786 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.351 00:15:02.351 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.351 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.351 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.608 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.608 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.608 10:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.608 10:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.608 10:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.608 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.608 { 00:15:02.608 "cntlid": 75, 00:15:02.608 "qid": 0, 00:15:02.608 "state": "enabled", 00:15:02.608 "thread": "nvmf_tgt_poll_group_000", 00:15:02.608 "listen_address": { 00:15:02.608 "trtype": "TCP", 00:15:02.608 "adrfam": "IPv4", 00:15:02.608 "traddr": "10.0.0.2", 00:15:02.608 "trsvcid": "4420" 00:15:02.608 }, 00:15:02.608 "peer_address": { 00:15:02.608 "trtype": "TCP", 00:15:02.608 "adrfam": "IPv4", 00:15:02.608 "traddr": "10.0.0.1", 00:15:02.608 "trsvcid": "60084" 00:15:02.608 }, 00:15:02.608 "auth": { 00:15:02.608 "state": "completed", 00:15:02.608 "digest": "sha384", 00:15:02.608 "dhgroup": "ffdhe4096" 00:15:02.608 } 00:15:02.608 } 00:15:02.608 ]' 00:15:02.608 10:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.608 10:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.608 10:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.608 10:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:02.608 10:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.608 10:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.608 10:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.608 10:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.866 10:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:15:03.799 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.799 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:03.799 10:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.799 10:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.799 10:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.799 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.799 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:03.799 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:04.057 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:04.057 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.057 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:04.057 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:04.057 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:04.057 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.057 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.057 10:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.057 10:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.057 10:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.057 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.057 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:04.623 00:15:04.623 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.623 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.623 10:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.623 10:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.623 10:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.623 10:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.623 10:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.880 10:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.880 10:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.880 { 00:15:04.880 "cntlid": 77, 00:15:04.880 "qid": 0, 00:15:04.880 "state": "enabled", 00:15:04.880 "thread": "nvmf_tgt_poll_group_000", 00:15:04.880 "listen_address": { 00:15:04.880 "trtype": "TCP", 00:15:04.880 "adrfam": "IPv4", 00:15:04.880 "traddr": "10.0.0.2", 00:15:04.880 "trsvcid": "4420" 00:15:04.880 }, 00:15:04.880 "peer_address": { 00:15:04.880 "trtype": "TCP", 00:15:04.880 "adrfam": "IPv4", 00:15:04.880 "traddr": "10.0.0.1", 00:15:04.880 "trsvcid": "60116" 00:15:04.880 }, 00:15:04.880 "auth": { 00:15:04.880 "state": "completed", 00:15:04.880 "digest": "sha384", 00:15:04.880 "dhgroup": "ffdhe4096" 00:15:04.880 } 00:15:04.880 } 00:15:04.880 ]' 00:15:04.880 10:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.880 10:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.880 10:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.880 10:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:04.880 10:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.880 10:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.880 10:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.880 10:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.137 10:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:15:06.068 10:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.068 10:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:06.068 10:31:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.068 10:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.068 10:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.068 10:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.068 10:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:06.068 10:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:06.325 10:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:06.325 10:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.325 10:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:06.325 10:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:06.325 10:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:06.325 10:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.325 10:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:06.325 10:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.326 10:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.326 10:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.326 10:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.326 10:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.582 00:15:06.583 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.583 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.583 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.840 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.840 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.840 10:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.840 10:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.840 10:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.840 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.840 { 00:15:06.840 "cntlid": 79, 00:15:06.840 "qid": 
0, 00:15:06.840 "state": "enabled", 00:15:06.840 "thread": "nvmf_tgt_poll_group_000", 00:15:06.840 "listen_address": { 00:15:06.840 "trtype": "TCP", 00:15:06.840 "adrfam": "IPv4", 00:15:06.840 "traddr": "10.0.0.2", 00:15:06.840 "trsvcid": "4420" 00:15:06.840 }, 00:15:06.840 "peer_address": { 00:15:06.840 "trtype": "TCP", 00:15:06.840 "adrfam": "IPv4", 00:15:06.840 "traddr": "10.0.0.1", 00:15:06.840 "trsvcid": "60134" 00:15:06.840 }, 00:15:06.840 "auth": { 00:15:06.840 "state": "completed", 00:15:06.840 "digest": "sha384", 00:15:06.840 "dhgroup": "ffdhe4096" 00:15:06.840 } 00:15:06.840 } 00:15:06.840 ]' 00:15:06.840 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.840 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.840 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.097 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:07.097 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.097 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.097 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.097 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.353 10:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:15:08.283 10:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.283 10:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:08.283 10:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.283 10:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.283 10:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.283 10:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.283 10:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.283 10:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:08.283 10:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:08.539 10:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:08.539 10:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.539 10:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:08.540 10:31:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:08.540 10:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:08.540 10:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.540 10:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.540 10:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.540 10:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.540 10:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.540 10:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.540 10:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.105 00:15:09.105 10:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.105 10:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.105 10:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.105 10:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.105 10:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.105 10:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.105 10:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.105 10:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.105 10:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.105 { 00:15:09.105 "cntlid": 81, 00:15:09.105 "qid": 0, 00:15:09.105 "state": "enabled", 00:15:09.105 "thread": "nvmf_tgt_poll_group_000", 00:15:09.105 "listen_address": { 00:15:09.105 "trtype": "TCP", 00:15:09.105 "adrfam": "IPv4", 00:15:09.105 "traddr": "10.0.0.2", 00:15:09.105 "trsvcid": "4420" 00:15:09.105 }, 00:15:09.105 "peer_address": { 00:15:09.105 "trtype": "TCP", 00:15:09.105 "adrfam": "IPv4", 00:15:09.105 "traddr": "10.0.0.1", 00:15:09.105 "trsvcid": "50148" 00:15:09.105 }, 00:15:09.105 "auth": { 00:15:09.105 "state": "completed", 00:15:09.105 "digest": "sha384", 00:15:09.105 "dhgroup": "ffdhe6144" 00:15:09.105 } 00:15:09.105 } 00:15:09.105 ]' 00:15:09.105 10:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.362 10:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.362 10:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.362 10:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:09.362 10:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.362 10:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.362 10:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.362 10:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.619 10:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:15:10.550 10:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.550 10:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:10.550 10:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.550 10:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.550 10:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.550 10:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.550 10:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:10.550 10:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:10.807 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:10.807 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.807 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:10.807 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:10.807 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:10.807 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.807 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.807 10:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.807 10:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.807 10:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.807 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.807 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.372 00:15:11.372 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.372 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.372 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.372 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.372 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.372 10:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.372 10:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.372 10:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.372 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.372 { 00:15:11.372 "cntlid": 83, 00:15:11.372 "qid": 0, 00:15:11.372 "state": "enabled", 00:15:11.372 "thread": "nvmf_tgt_poll_group_000", 00:15:11.372 "listen_address": { 00:15:11.372 "trtype": "TCP", 00:15:11.372 "adrfam": "IPv4", 00:15:11.372 "traddr": "10.0.0.2", 00:15:11.372 "trsvcid": "4420" 00:15:11.372 }, 00:15:11.372 "peer_address": { 00:15:11.372 "trtype": "TCP", 00:15:11.372 "adrfam": "IPv4", 00:15:11.372 "traddr": "10.0.0.1", 00:15:11.372 "trsvcid": "50176" 00:15:11.372 }, 00:15:11.372 "auth": { 00:15:11.372 "state": "completed", 00:15:11.372 "digest": "sha384", 00:15:11.372 "dhgroup": "ffdhe6144" 00:15:11.372 } 00:15:11.372 } 00:15:11.372 ]' 00:15:11.372 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.631 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.631 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.631 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:11.631 10:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.631 10:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.631 10:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.631 10:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.888 10:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret 
DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:15:12.820 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.820 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:12.820 10:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.820 10:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.820 10:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.820 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.820 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:12.820 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:13.078 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:13.078 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.078 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:13.078 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:13.078 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:13.078 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.078 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.078 10:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.078 10:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.078 10:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.078 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.078 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.642 00:15:13.642 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.642 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.642 10:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.900 10:32:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.900 10:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.900 10:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.900 10:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.900 10:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.900 10:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.900 { 00:15:13.900 "cntlid": 85, 00:15:13.900 "qid": 0, 00:15:13.900 "state": "enabled", 00:15:13.900 "thread": "nvmf_tgt_poll_group_000", 00:15:13.900 "listen_address": { 00:15:13.900 "trtype": "TCP", 00:15:13.900 "adrfam": "IPv4", 00:15:13.900 "traddr": "10.0.0.2", 00:15:13.900 "trsvcid": "4420" 00:15:13.900 }, 00:15:13.900 "peer_address": { 00:15:13.900 "trtype": "TCP", 00:15:13.900 "adrfam": "IPv4", 00:15:13.900 "traddr": "10.0.0.1", 00:15:13.900 "trsvcid": "50216" 00:15:13.900 }, 00:15:13.900 "auth": { 00:15:13.900 "state": "completed", 00:15:13.900 "digest": "sha384", 00:15:13.900 "dhgroup": "ffdhe6144" 00:15:13.900 } 00:15:13.900 } 00:15:13.900 ]' 00:15:13.900 10:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.900 10:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.900 10:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.900 10:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:13.900 10:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.900 10:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.900 10:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.900 10:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.156 10:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:15:15.086 10:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.086 10:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:15.086 10:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.086 10:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.086 10:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.086 10:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.086 10:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
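[editor note] The ffdhe6144 pass continuing here repeats the same sequence already shown for ffdhe4096, and the ffdhe8192 pass further below follows it as well. A condensed sketch of that per-key loop body is given for reference; it is an outline of the commands visible in this log, not the verbatim target/auth.sh source. rpc_cmd drives the target's RPC socket and hostrpc stands for rpc.py -s /var/tmp/host.sock, as above; key names and secrets are assumed to be preloaded in the keys/ckeys arrays the loop iterates over.

  # schematic: one iteration of the digest/dhgroup/key loop seen in this run
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  subnqn=nqn.2024-03.io.spdk:cnode0
  # restrict the host to the digest/dhgroup combination under test
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # allow the host on the target with the matching DH-HMAC-CHAP key pair
  # (--dhchap-ctrlr-key is only passed when a controller key exists for this keyid, e.g. not for key3)
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # host-side attach, authenticating with the same keys
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # verify controller name and qpair auth state (see the jq checks in the log), then tear down
  hostrpc bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
          --dhchap-secret "${keys[$keyid]}" --dhchap-ctrl-secret "${ckeys[$keyid]}"
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"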
00:15:15.086 10:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:15.343 10:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:15.343 10:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.343 10:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:15.343 10:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:15.343 10:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:15.343 10:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.343 10:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:15.343 10:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.343 10:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.343 10:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.343 10:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:15.343 10:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:15.906 00:15:15.907 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.907 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.907 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.163 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.163 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.163 10:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.164 10:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.164 10:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.164 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.164 { 00:15:16.164 "cntlid": 87, 00:15:16.164 "qid": 0, 00:15:16.164 "state": "enabled", 00:15:16.164 "thread": "nvmf_tgt_poll_group_000", 00:15:16.164 "listen_address": { 00:15:16.164 "trtype": "TCP", 00:15:16.164 "adrfam": "IPv4", 00:15:16.164 "traddr": "10.0.0.2", 00:15:16.164 "trsvcid": "4420" 00:15:16.164 }, 00:15:16.164 "peer_address": { 00:15:16.164 "trtype": "TCP", 00:15:16.164 "adrfam": "IPv4", 00:15:16.164 "traddr": "10.0.0.1", 00:15:16.164 "trsvcid": "50246" 00:15:16.164 }, 00:15:16.164 "auth": { 00:15:16.164 "state": "completed", 
00:15:16.164 "digest": "sha384", 00:15:16.164 "dhgroup": "ffdhe6144" 00:15:16.164 } 00:15:16.164 } 00:15:16.164 ]' 00:15:16.164 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.164 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.164 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.164 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:16.164 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.164 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.164 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.164 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.421 10:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:15:17.353 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.353 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:17.353 10:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.353 10:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.353 10:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.353 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:17.353 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.353 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:17.353 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:17.611 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:17.611 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.611 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:17.611 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:17.611 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:17.611 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.611 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:17.611 10:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.611 10:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.611 10:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.611 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.611 10:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.542 00:15:18.542 10:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.542 10:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.542 10:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.542 10:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.542 10:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.542 10:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.542 10:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.542 10:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.542 10:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.542 { 00:15:18.542 "cntlid": 89, 00:15:18.542 "qid": 0, 00:15:18.542 "state": "enabled", 00:15:18.542 "thread": "nvmf_tgt_poll_group_000", 00:15:18.542 "listen_address": { 00:15:18.542 "trtype": "TCP", 00:15:18.542 "adrfam": "IPv4", 00:15:18.542 "traddr": "10.0.0.2", 00:15:18.542 "trsvcid": "4420" 00:15:18.542 }, 00:15:18.542 "peer_address": { 00:15:18.542 "trtype": "TCP", 00:15:18.542 "adrfam": "IPv4", 00:15:18.542 "traddr": "10.0.0.1", 00:15:18.542 "trsvcid": "50282" 00:15:18.542 }, 00:15:18.542 "auth": { 00:15:18.542 "state": "completed", 00:15:18.542 "digest": "sha384", 00:15:18.542 "dhgroup": "ffdhe8192" 00:15:18.542 } 00:15:18.542 } 00:15:18.542 ]' 00:15:18.542 10:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.542 10:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.542 10:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.799 10:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:18.799 10:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.799 10:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.799 10:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.799 10:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.056 10:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:15:19.987 10:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.987 10:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:19.987 10:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.987 10:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.987 10:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.987 10:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.987 10:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:19.987 10:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:20.244 10:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:20.244 10:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.244 10:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.244 10:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:20.244 10:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:20.244 10:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.244 10:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.244 10:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.244 10:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.244 10:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.244 10:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.244 10:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
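[editor note] Between each attach and detach, the script verifies the negotiated authentication parameters. Reduced to plain shell, the checks performed in this ffdhe8192 pass amount to roughly the following sketch; the jq filters are the ones used in the log, and the expected values are those of the current iteration:

  # schematic: post-attach verification, assuming hostrpc and rpc_cmd as above
  name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == nvme0 ]]
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]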
00:15:21.176 00:15:21.176 10:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.176 10:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.176 10:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.434 10:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.434 10:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.434 10:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.434 10:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.434 10:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.434 10:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.434 { 00:15:21.434 "cntlid": 91, 00:15:21.434 "qid": 0, 00:15:21.434 "state": "enabled", 00:15:21.434 "thread": "nvmf_tgt_poll_group_000", 00:15:21.434 "listen_address": { 00:15:21.434 "trtype": "TCP", 00:15:21.434 "adrfam": "IPv4", 00:15:21.434 "traddr": "10.0.0.2", 00:15:21.434 "trsvcid": "4420" 00:15:21.434 }, 00:15:21.434 "peer_address": { 00:15:21.434 "trtype": "TCP", 00:15:21.434 "adrfam": "IPv4", 00:15:21.434 "traddr": "10.0.0.1", 00:15:21.434 "trsvcid": "44096" 00:15:21.434 }, 00:15:21.434 "auth": { 00:15:21.434 "state": "completed", 00:15:21.434 "digest": "sha384", 00:15:21.434 "dhgroup": "ffdhe8192" 00:15:21.434 } 00:15:21.434 } 00:15:21.434 ]' 00:15:21.434 10:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.434 10:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.434 10:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.434 10:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:21.434 10:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.434 10:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.434 10:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.434 10:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.692 10:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:15:22.624 10:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.625 10:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:22.625 10:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:22.625 10:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.625 10:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.625 10:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.625 10:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:22.625 10:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:22.884 10:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:22.884 10:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.884 10:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.884 10:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:22.884 10:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:22.884 10:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.884 10:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.884 10:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.884 10:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.884 10:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.884 10:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.884 10:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.840 00:15:23.840 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.840 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.840 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.097 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.097 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.097 10:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.097 10:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.097 10:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.097 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.097 { 
00:15:24.097 "cntlid": 93, 00:15:24.097 "qid": 0, 00:15:24.097 "state": "enabled", 00:15:24.097 "thread": "nvmf_tgt_poll_group_000", 00:15:24.097 "listen_address": { 00:15:24.097 "trtype": "TCP", 00:15:24.097 "adrfam": "IPv4", 00:15:24.097 "traddr": "10.0.0.2", 00:15:24.097 "trsvcid": "4420" 00:15:24.097 }, 00:15:24.097 "peer_address": { 00:15:24.097 "trtype": "TCP", 00:15:24.097 "adrfam": "IPv4", 00:15:24.097 "traddr": "10.0.0.1", 00:15:24.097 "trsvcid": "44106" 00:15:24.097 }, 00:15:24.097 "auth": { 00:15:24.097 "state": "completed", 00:15:24.097 "digest": "sha384", 00:15:24.097 "dhgroup": "ffdhe8192" 00:15:24.097 } 00:15:24.097 } 00:15:24.097 ]' 00:15:24.097 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.097 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.097 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.097 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.097 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.097 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.097 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.097 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.354 10:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:15:25.285 10:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.285 10:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:25.285 10:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.285 10:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.285 10:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.285 10:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.285 10:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:25.285 10:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:25.543 10:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:25.543 10:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.543 10:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:25.543 10:32:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:25.543 10:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:25.543 10:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.543 10:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:25.543 10:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.543 10:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.543 10:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.543 10:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.543 10:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:26.477 00:15:26.477 10:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.477 10:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.477 10:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.734 10:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.734 10:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.734 10:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.734 10:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.734 10:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.734 10:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.734 { 00:15:26.734 "cntlid": 95, 00:15:26.734 "qid": 0, 00:15:26.734 "state": "enabled", 00:15:26.734 "thread": "nvmf_tgt_poll_group_000", 00:15:26.734 "listen_address": { 00:15:26.734 "trtype": "TCP", 00:15:26.734 "adrfam": "IPv4", 00:15:26.734 "traddr": "10.0.0.2", 00:15:26.734 "trsvcid": "4420" 00:15:26.734 }, 00:15:26.734 "peer_address": { 00:15:26.734 "trtype": "TCP", 00:15:26.734 "adrfam": "IPv4", 00:15:26.734 "traddr": "10.0.0.1", 00:15:26.734 "trsvcid": "44120" 00:15:26.734 }, 00:15:26.734 "auth": { 00:15:26.734 "state": "completed", 00:15:26.734 "digest": "sha384", 00:15:26.734 "dhgroup": "ffdhe8192" 00:15:26.734 } 00:15:26.734 } 00:15:26.734 ]' 00:15:26.734 10:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.734 10:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.734 10:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.734 10:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.734 10:32:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.734 10:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.734 10:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.734 10:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.992 10:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:15:27.925 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.925 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:27.925 10:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.925 10:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.925 10:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.925 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:27.925 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.925 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.925 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:27.925 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:28.182 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:28.183 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.183 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:28.183 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:28.183 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:28.183 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.183 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.183 10:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.183 10:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.183 10:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.183 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.183 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.440 00:15:28.440 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.440 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.440 10:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.698 10:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.698 10:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.698 10:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.698 10:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.698 10:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.698 10:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.698 { 00:15:28.698 "cntlid": 97, 00:15:28.698 "qid": 0, 00:15:28.698 "state": "enabled", 00:15:28.698 "thread": "nvmf_tgt_poll_group_000", 00:15:28.698 "listen_address": { 00:15:28.698 "trtype": "TCP", 00:15:28.698 "adrfam": "IPv4", 00:15:28.698 "traddr": "10.0.0.2", 00:15:28.698 "trsvcid": "4420" 00:15:28.698 }, 00:15:28.698 "peer_address": { 00:15:28.698 "trtype": "TCP", 00:15:28.698 "adrfam": "IPv4", 00:15:28.698 "traddr": "10.0.0.1", 00:15:28.698 "trsvcid": "52996" 00:15:28.698 }, 00:15:28.698 "auth": { 00:15:28.698 "state": "completed", 00:15:28.698 "digest": "sha512", 00:15:28.698 "dhgroup": "null" 00:15:28.698 } 00:15:28.698 } 00:15:28.698 ]' 00:15:28.698 10:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.698 10:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:28.698 10:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.698 10:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:28.698 10:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.698 10:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.698 10:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.698 10:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.956 10:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret 
DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:15:29.888 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.888 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:29.888 10:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.888 10:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.888 10:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.888 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.888 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:29.888 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:30.146 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:30.146 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.146 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:30.146 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:30.146 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:30.146 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.146 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.146 10:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.146 10:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.146 10:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.146 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.146 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.403 00:15:30.403 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.403 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.403 10:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.660 10:32:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.660 10:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.660 10:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.660 10:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.660 10:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.660 10:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.660 { 00:15:30.660 "cntlid": 99, 00:15:30.660 "qid": 0, 00:15:30.660 "state": "enabled", 00:15:30.660 "thread": "nvmf_tgt_poll_group_000", 00:15:30.660 "listen_address": { 00:15:30.660 "trtype": "TCP", 00:15:30.660 "adrfam": "IPv4", 00:15:30.660 "traddr": "10.0.0.2", 00:15:30.660 "trsvcid": "4420" 00:15:30.660 }, 00:15:30.660 "peer_address": { 00:15:30.660 "trtype": "TCP", 00:15:30.660 "adrfam": "IPv4", 00:15:30.660 "traddr": "10.0.0.1", 00:15:30.660 "trsvcid": "53010" 00:15:30.660 }, 00:15:30.660 "auth": { 00:15:30.660 "state": "completed", 00:15:30.660 "digest": "sha512", 00:15:30.660 "dhgroup": "null" 00:15:30.660 } 00:15:30.660 } 00:15:30.660 ]' 00:15:30.660 10:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.917 10:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:30.917 10:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.917 10:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:30.917 10:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.917 10:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.917 10:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.917 10:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.184 10:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:15:32.128 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.128 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:32.128 10:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.128 10:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.128 10:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.128 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.129 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:32.129 10:32:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:32.129 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:32.129 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.129 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:32.129 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:32.129 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:32.129 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.129 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.129 10:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.129 10:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.129 10:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.129 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.129 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.694 00:15:32.694 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.694 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.694 10:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.694 10:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.694 10:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.694 10:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.694 10:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.694 10:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.694 10:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.694 { 00:15:32.694 "cntlid": 101, 00:15:32.694 "qid": 0, 00:15:32.694 "state": "enabled", 00:15:32.694 "thread": "nvmf_tgt_poll_group_000", 00:15:32.694 "listen_address": { 00:15:32.694 "trtype": "TCP", 00:15:32.694 "adrfam": "IPv4", 00:15:32.694 "traddr": "10.0.0.2", 00:15:32.694 "trsvcid": "4420" 00:15:32.694 }, 00:15:32.694 "peer_address": { 00:15:32.694 "trtype": "TCP", 00:15:32.694 "adrfam": "IPv4", 00:15:32.694 "traddr": "10.0.0.1", 00:15:32.694 "trsvcid": "53032" 00:15:32.694 }, 00:15:32.694 "auth": 
{ 00:15:32.694 "state": "completed", 00:15:32.694 "digest": "sha512", 00:15:32.694 "dhgroup": "null" 00:15:32.694 } 00:15:32.694 } 00:15:32.694 ]' 00:15:32.694 10:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.951 10:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:32.951 10:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.951 10:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:32.951 10:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.951 10:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.951 10:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.952 10:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.209 10:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:15:34.141 10:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.141 10:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:34.141 10:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.141 10:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.141 10:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.141 10:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.141 10:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:34.141 10:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:34.398 10:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:34.398 10:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.398 10:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:34.398 10:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:34.398 10:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:34.398 10:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.398 10:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:34.398 10:32:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.398 10:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.398 10:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.398 10:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.398 10:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.656 00:15:34.656 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.656 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.656 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.913 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.913 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.913 10:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.913 10:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.913 10:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.913 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.913 { 00:15:34.913 "cntlid": 103, 00:15:34.913 "qid": 0, 00:15:34.913 "state": "enabled", 00:15:34.913 "thread": "nvmf_tgt_poll_group_000", 00:15:34.913 "listen_address": { 00:15:34.913 "trtype": "TCP", 00:15:34.913 "adrfam": "IPv4", 00:15:34.913 "traddr": "10.0.0.2", 00:15:34.913 "trsvcid": "4420" 00:15:34.913 }, 00:15:34.913 "peer_address": { 00:15:34.913 "trtype": "TCP", 00:15:34.913 "adrfam": "IPv4", 00:15:34.913 "traddr": "10.0.0.1", 00:15:34.913 "trsvcid": "53060" 00:15:34.913 }, 00:15:34.913 "auth": { 00:15:34.913 "state": "completed", 00:15:34.913 "digest": "sha512", 00:15:34.913 "dhgroup": "null" 00:15:34.913 } 00:15:34.913 } 00:15:34.913 ]' 00:15:34.913 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.913 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:34.913 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.913 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:34.913 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.913 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.170 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.170 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.170 10:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:15:36.103 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.103 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:36.103 10:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.103 10:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.103 10:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.103 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.103 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.103 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:36.103 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:36.363 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:36.363 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.363 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:36.363 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:36.363 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:36.363 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.363 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.363 10:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.363 10:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.648 10:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.648 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.648 10:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.906 00:15:36.906 10:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.906 10:32:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.906 10:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.163 10:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.163 10:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.163 10:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.163 10:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.163 10:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.163 10:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.163 { 00:15:37.163 "cntlid": 105, 00:15:37.163 "qid": 0, 00:15:37.163 "state": "enabled", 00:15:37.163 "thread": "nvmf_tgt_poll_group_000", 00:15:37.163 "listen_address": { 00:15:37.163 "trtype": "TCP", 00:15:37.163 "adrfam": "IPv4", 00:15:37.163 "traddr": "10.0.0.2", 00:15:37.163 "trsvcid": "4420" 00:15:37.163 }, 00:15:37.163 "peer_address": { 00:15:37.163 "trtype": "TCP", 00:15:37.163 "adrfam": "IPv4", 00:15:37.163 "traddr": "10.0.0.1", 00:15:37.163 "trsvcid": "53086" 00:15:37.163 }, 00:15:37.163 "auth": { 00:15:37.163 "state": "completed", 00:15:37.163 "digest": "sha512", 00:15:37.163 "dhgroup": "ffdhe2048" 00:15:37.163 } 00:15:37.163 } 00:15:37.163 ]' 00:15:37.163 10:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.163 10:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:37.163 10:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.163 10:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.163 10:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.163 10:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.163 10:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.163 10:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.421 10:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:15:38.387 10:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.387 10:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:38.387 10:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.387 10:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
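The trace repeats the same authentication round trip for every digest/dhgroup/key combination. The following is a condensed paraphrase of one such iteration, assembled only from the rpc.py, jq, and nvme-cli invocations that appear verbatim in this log (sha512 / ffdhe2048 / key1 used as the example); the DHHC-1 secrets are placeholders, and the target-side RPC socket used by rpc_cmd is not shown in this trace, so it is assumed to be the default. This is an illustrative sketch, not the actual target/auth.sh source.

# hostrpc drives the host-side SPDK app via /var/tmp/host.sock (as in the trace);
# rpc_cmd is the autotest helper from common/autotest_common.sh that drives the target app.
hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# limit the host bdev layer to the digest/dhgroup combination under test
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# allow the host on the subsystem with the key (and optional controller key) under test
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# attach a controller from the host app, authenticating with the same key pair
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# verify the attach succeeded and that the target reports the qpair as authenticated
# (the trace also checks .auth.digest and .auth.dhgroup against the test case)
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                   # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'  # expect "completed"

# tear the SPDK-host path down, then repeat the handshake with the kernel initiator
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
     --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
     --dhchap-secret 'DHHC-1:00:<placeholder>:' --dhchap-ctrl-secret 'DHHC-1:03:<placeholder>:'
nvme disconnect -n "$subnqn"
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
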
00:15:38.387 10:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.387 10:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.387 10:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:38.387 10:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:38.645 10:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:38.645 10:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.645 10:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:38.645 10:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:38.645 10:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:38.645 10:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.645 10:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.645 10:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.645 10:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.645 10:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.645 10:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.645 10:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.904 00:15:38.904 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.904 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.904 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.162 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.162 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.162 10:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.162 10:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.162 10:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.162 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.162 { 00:15:39.162 "cntlid": 107, 00:15:39.162 "qid": 0, 00:15:39.162 "state": "enabled", 00:15:39.162 "thread": 
"nvmf_tgt_poll_group_000", 00:15:39.162 "listen_address": { 00:15:39.162 "trtype": "TCP", 00:15:39.162 "adrfam": "IPv4", 00:15:39.162 "traddr": "10.0.0.2", 00:15:39.162 "trsvcid": "4420" 00:15:39.162 }, 00:15:39.162 "peer_address": { 00:15:39.162 "trtype": "TCP", 00:15:39.162 "adrfam": "IPv4", 00:15:39.162 "traddr": "10.0.0.1", 00:15:39.162 "trsvcid": "44128" 00:15:39.162 }, 00:15:39.162 "auth": { 00:15:39.162 "state": "completed", 00:15:39.162 "digest": "sha512", 00:15:39.162 "dhgroup": "ffdhe2048" 00:15:39.162 } 00:15:39.162 } 00:15:39.162 ]' 00:15:39.162 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.162 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.162 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.162 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.162 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.162 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.162 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.162 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.419 10:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:15:40.352 10:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.352 10:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:40.352 10:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.352 10:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.352 10:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.352 10:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.352 10:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:40.352 10:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:40.610 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:15:40.610 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.610 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:40.610 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:40.610 10:32:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:40.610 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.610 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.610 10:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.610 10:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.610 10:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.610 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.610 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.175 00:15:41.175 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.175 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.175 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.433 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.433 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.433 10:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.433 10:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.433 10:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.433 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.433 { 00:15:41.433 "cntlid": 109, 00:15:41.433 "qid": 0, 00:15:41.433 "state": "enabled", 00:15:41.433 "thread": "nvmf_tgt_poll_group_000", 00:15:41.433 "listen_address": { 00:15:41.433 "trtype": "TCP", 00:15:41.433 "adrfam": "IPv4", 00:15:41.433 "traddr": "10.0.0.2", 00:15:41.433 "trsvcid": "4420" 00:15:41.433 }, 00:15:41.433 "peer_address": { 00:15:41.433 "trtype": "TCP", 00:15:41.433 "adrfam": "IPv4", 00:15:41.433 "traddr": "10.0.0.1", 00:15:41.433 "trsvcid": "44158" 00:15:41.433 }, 00:15:41.433 "auth": { 00:15:41.433 "state": "completed", 00:15:41.433 "digest": "sha512", 00:15:41.433 "dhgroup": "ffdhe2048" 00:15:41.433 } 00:15:41.433 } 00:15:41.433 ]' 00:15:41.433 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.433 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:41.433 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.433 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:41.433 10:32:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.433 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.433 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.433 10:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.691 10:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:15:42.623 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.624 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:42.624 10:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.624 10:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.624 10:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.624 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.624 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:42.624 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:42.881 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:15:42.881 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.881 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:42.881 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:42.881 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:42.881 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.881 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:42.881 10:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.881 10:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.881 10:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.881 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:42.881 10:32:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:43.139 00:15:43.396 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.396 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.396 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.654 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.654 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.654 10:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.654 10:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.654 10:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.654 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.654 { 00:15:43.654 "cntlid": 111, 00:15:43.654 "qid": 0, 00:15:43.654 "state": "enabled", 00:15:43.654 "thread": "nvmf_tgt_poll_group_000", 00:15:43.654 "listen_address": { 00:15:43.654 "trtype": "TCP", 00:15:43.654 "adrfam": "IPv4", 00:15:43.654 "traddr": "10.0.0.2", 00:15:43.654 "trsvcid": "4420" 00:15:43.654 }, 00:15:43.654 "peer_address": { 00:15:43.654 "trtype": "TCP", 00:15:43.654 "adrfam": "IPv4", 00:15:43.654 "traddr": "10.0.0.1", 00:15:43.654 "trsvcid": "44182" 00:15:43.654 }, 00:15:43.654 "auth": { 00:15:43.654 "state": "completed", 00:15:43.654 "digest": "sha512", 00:15:43.654 "dhgroup": "ffdhe2048" 00:15:43.654 } 00:15:43.654 } 00:15:43.654 ]' 00:15:43.654 10:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.654 10:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.654 10:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.654 10:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.654 10:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.654 10:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.654 10:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.654 10:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.912 10:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:15:44.917 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.917 10:32:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:44.917 10:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.917 10:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.917 10:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.917 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.917 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.917 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.917 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:45.173 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:15:45.173 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.173 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:45.173 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:45.173 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:45.173 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.173 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.173 10:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.173 10:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.173 10:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.173 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.173 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.429 00:15:45.429 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.429 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.429 10:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.686 10:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.686 10:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.686 10:32:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.686 10:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.686 10:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.686 10:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.686 { 00:15:45.686 "cntlid": 113, 00:15:45.686 "qid": 0, 00:15:45.686 "state": "enabled", 00:15:45.686 "thread": "nvmf_tgt_poll_group_000", 00:15:45.686 "listen_address": { 00:15:45.686 "trtype": "TCP", 00:15:45.686 "adrfam": "IPv4", 00:15:45.686 "traddr": "10.0.0.2", 00:15:45.686 "trsvcid": "4420" 00:15:45.686 }, 00:15:45.686 "peer_address": { 00:15:45.686 "trtype": "TCP", 00:15:45.686 "adrfam": "IPv4", 00:15:45.686 "traddr": "10.0.0.1", 00:15:45.686 "trsvcid": "44206" 00:15:45.686 }, 00:15:45.686 "auth": { 00:15:45.686 "state": "completed", 00:15:45.686 "digest": "sha512", 00:15:45.686 "dhgroup": "ffdhe3072" 00:15:45.686 } 00:15:45.686 } 00:15:45.686 ]' 00:15:45.686 10:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.686 10:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.686 10:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.686 10:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:45.686 10:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.686 10:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.686 10:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.686 10:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.943 10:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:15:46.873 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.873 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:46.873 10:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.873 10:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.873 10:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.873 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.873 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:46.873 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:47.131 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:15:47.131 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.131 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:47.131 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:47.131 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:47.131 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.131 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.131 10:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.131 10:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.131 10:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.131 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.131 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.695 00:15:47.695 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.695 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.695 10:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.695 10:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.695 10:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.695 10:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.695 10:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.695 10:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.695 10:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.695 { 00:15:47.695 "cntlid": 115, 00:15:47.695 "qid": 0, 00:15:47.695 "state": "enabled", 00:15:47.695 "thread": "nvmf_tgt_poll_group_000", 00:15:47.695 "listen_address": { 00:15:47.695 "trtype": "TCP", 00:15:47.695 "adrfam": "IPv4", 00:15:47.695 "traddr": "10.0.0.2", 00:15:47.695 "trsvcid": "4420" 00:15:47.695 }, 00:15:47.695 "peer_address": { 00:15:47.695 "trtype": "TCP", 00:15:47.695 "adrfam": "IPv4", 00:15:47.695 "traddr": "10.0.0.1", 00:15:47.695 "trsvcid": "44230" 00:15:47.695 }, 00:15:47.695 "auth": { 00:15:47.695 "state": "completed", 00:15:47.695 "digest": "sha512", 00:15:47.695 "dhgroup": "ffdhe3072" 00:15:47.695 } 00:15:47.695 } 
00:15:47.695 ]' 00:15:47.695 10:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.953 10:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.953 10:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.953 10:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:47.953 10:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.953 10:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.953 10:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.953 10:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.211 10:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:15:49.142 10:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.142 10:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:49.142 10:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.142 10:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.142 10:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.142 10:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.142 10:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:49.142 10:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:49.401 10:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:15:49.401 10:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.401 10:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:49.401 10:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:49.401 10:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:49.401 10:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.401 10:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.401 10:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.401 10:32:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.401 10:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.401 10:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.401 10:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.660 00:15:49.660 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.660 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.660 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.919 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.919 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.919 10:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.919 10:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.919 10:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.919 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.919 { 00:15:49.919 "cntlid": 117, 00:15:49.919 "qid": 0, 00:15:49.919 "state": "enabled", 00:15:49.919 "thread": "nvmf_tgt_poll_group_000", 00:15:49.919 "listen_address": { 00:15:49.919 "trtype": "TCP", 00:15:49.919 "adrfam": "IPv4", 00:15:49.919 "traddr": "10.0.0.2", 00:15:49.919 "trsvcid": "4420" 00:15:49.919 }, 00:15:49.919 "peer_address": { 00:15:49.919 "trtype": "TCP", 00:15:49.919 "adrfam": "IPv4", 00:15:49.919 "traddr": "10.0.0.1", 00:15:49.919 "trsvcid": "57752" 00:15:49.919 }, 00:15:49.919 "auth": { 00:15:49.919 "state": "completed", 00:15:49.919 "digest": "sha512", 00:15:49.919 "dhgroup": "ffdhe3072" 00:15:49.919 } 00:15:49.919 } 00:15:49.919 ]' 00:15:49.919 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.919 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.177 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.177 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.177 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.177 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.177 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.177 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.433 10:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:15:51.365 10:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.365 10:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:51.365 10:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.365 10:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.365 10:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.365 10:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.365 10:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:51.365 10:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:51.622 10:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:15:51.622 10:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.622 10:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:51.622 10:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:51.622 10:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:51.622 10:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.622 10:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:51.622 10:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.622 10:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.622 10:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.622 10:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:51.622 10:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:51.878 00:15:51.878 10:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.878 10:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.878 10:32:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.135 10:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.135 10:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.135 10:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.136 10:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.136 10:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.136 10:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.136 { 00:15:52.136 "cntlid": 119, 00:15:52.136 "qid": 0, 00:15:52.136 "state": "enabled", 00:15:52.136 "thread": "nvmf_tgt_poll_group_000", 00:15:52.136 "listen_address": { 00:15:52.136 "trtype": "TCP", 00:15:52.136 "adrfam": "IPv4", 00:15:52.136 "traddr": "10.0.0.2", 00:15:52.136 "trsvcid": "4420" 00:15:52.136 }, 00:15:52.136 "peer_address": { 00:15:52.136 "trtype": "TCP", 00:15:52.136 "adrfam": "IPv4", 00:15:52.136 "traddr": "10.0.0.1", 00:15:52.136 "trsvcid": "57784" 00:15:52.136 }, 00:15:52.136 "auth": { 00:15:52.136 "state": "completed", 00:15:52.136 "digest": "sha512", 00:15:52.136 "dhgroup": "ffdhe3072" 00:15:52.136 } 00:15:52.136 } 00:15:52.136 ]' 00:15:52.136 10:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.136 10:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.136 10:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.136 10:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.136 10:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.136 10:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.136 10:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.136 10:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.392 10:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:15:53.321 10:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.321 10:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:53.321 10:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.321 10:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.321 10:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.321 10:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.321 10:32:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.321 10:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:53.321 10:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:53.577 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:15:53.577 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.577 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:53.577 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:53.577 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:53.577 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.577 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.577 10:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.577 10:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.577 10:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.577 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.577 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.138 00:15:54.138 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.138 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.138 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.395 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.395 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.395 10:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.395 10:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.395 10:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.395 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.395 { 00:15:54.395 "cntlid": 121, 00:15:54.395 "qid": 0, 00:15:54.395 "state": "enabled", 00:15:54.395 "thread": "nvmf_tgt_poll_group_000", 00:15:54.395 "listen_address": { 00:15:54.395 "trtype": "TCP", 00:15:54.395 "adrfam": "IPv4", 
00:15:54.395 "traddr": "10.0.0.2", 00:15:54.395 "trsvcid": "4420" 00:15:54.395 }, 00:15:54.395 "peer_address": { 00:15:54.395 "trtype": "TCP", 00:15:54.395 "adrfam": "IPv4", 00:15:54.395 "traddr": "10.0.0.1", 00:15:54.395 "trsvcid": "57814" 00:15:54.395 }, 00:15:54.395 "auth": { 00:15:54.395 "state": "completed", 00:15:54.395 "digest": "sha512", 00:15:54.395 "dhgroup": "ffdhe4096" 00:15:54.395 } 00:15:54.395 } 00:15:54.395 ]' 00:15:54.395 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.395 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.395 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.395 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:54.395 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.395 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.395 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.395 10:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.651 10:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:15:55.581 10:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.581 10:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:55.581 10:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.581 10:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.581 10:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.581 10:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.581 10:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:55.581 10:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:55.839 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:15:55.839 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.839 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:55.839 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:55.839 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:55.839 10:32:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.839 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.839 10:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.839 10:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.839 10:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.839 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.839 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.097 00:15:56.097 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.097 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.097 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.354 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.354 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.354 10:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.354 10:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.354 10:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.354 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.354 { 00:15:56.354 "cntlid": 123, 00:15:56.354 "qid": 0, 00:15:56.354 "state": "enabled", 00:15:56.354 "thread": "nvmf_tgt_poll_group_000", 00:15:56.354 "listen_address": { 00:15:56.354 "trtype": "TCP", 00:15:56.354 "adrfam": "IPv4", 00:15:56.354 "traddr": "10.0.0.2", 00:15:56.354 "trsvcid": "4420" 00:15:56.354 }, 00:15:56.354 "peer_address": { 00:15:56.354 "trtype": "TCP", 00:15:56.354 "adrfam": "IPv4", 00:15:56.354 "traddr": "10.0.0.1", 00:15:56.354 "trsvcid": "57838" 00:15:56.354 }, 00:15:56.354 "auth": { 00:15:56.354 "state": "completed", 00:15:56.354 "digest": "sha512", 00:15:56.354 "dhgroup": "ffdhe4096" 00:15:56.354 } 00:15:56.354 } 00:15:56.354 ]' 00:15:56.354 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.354 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.354 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.611 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:56.611 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.611 10:32:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.611 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.611 10:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.868 10:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.907 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.471 00:15:58.471 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.471 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.471 10:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.471 10:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.471 10:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.471 10:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.471 10:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.727 10:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.727 10:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.727 { 00:15:58.727 "cntlid": 125, 00:15:58.727 "qid": 0, 00:15:58.727 "state": "enabled", 00:15:58.727 "thread": "nvmf_tgt_poll_group_000", 00:15:58.727 "listen_address": { 00:15:58.727 "trtype": "TCP", 00:15:58.727 "adrfam": "IPv4", 00:15:58.727 "traddr": "10.0.0.2", 00:15:58.727 "trsvcid": "4420" 00:15:58.727 }, 00:15:58.727 "peer_address": { 00:15:58.727 "trtype": "TCP", 00:15:58.727 "adrfam": "IPv4", 00:15:58.727 "traddr": "10.0.0.1", 00:15:58.727 "trsvcid": "48336" 00:15:58.727 }, 00:15:58.727 "auth": { 00:15:58.727 "state": "completed", 00:15:58.727 "digest": "sha512", 00:15:58.727 "dhgroup": "ffdhe4096" 00:15:58.727 } 00:15:58.727 } 00:15:58.727 ]' 00:15:58.727 10:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.727 10:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.727 10:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.727 10:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.727 10:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.727 10:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.727 10:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.727 10:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.984 10:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:15:59.916 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
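The iterations recorded above all exercise the same DH-HMAC-CHAP cycle: restrict the host-side bdev_nvme options to one digest and DH group, register the host on the target subsystem with the key (and, where present, controller key) it must use, attach the controller so the authentication exchange actually runs, confirm via nvmf_subsystem_get_qpairs that the qpair reports the expected digest, dhgroup and a "completed" auth state, then detach and remove the host before the next combination. A condensed, hypothetical sketch of that loop in bash follows; it assumes a relative SPDK checkout at ./spdk, the host RPC socket at /var/tmp/host.sock used throughout this log, and keyring entries key0..key3 (with controller keys ckey0..ckey2) registered earlier in the run; the rpc/hostrpc helpers stand in for the wrappers defined by target/auth.sh rather than reproducing them.

#!/usr/bin/env bash
# Condensed sketch of the connect_authenticate loop seen in this trace (not the original script).
rpc()     { ./spdk/scripts/rpc.py "$@"; }                         # target-side RPC, default socket assumed
hostrpc() { ./spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }   # host-side RPC socket, path taken from the log

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
  for key in 0 1 2 3; do
    ckey_args=()
    # In this run key3 has no controller key, so the bidirectional flags are dropped for it.
    [ "$key" -ne 3 ] && ckey_args=(--dhchap-ctrlr-key "ckey$key")

    # Host side: allow only this digest/DH group for DH-HMAC-CHAP.
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"

    # Target side: the host must authenticate with keyN (and prove ckeyN back, if set).
    rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$key" "${ckey_args[@]}"

    # Attaching the controller is where the DH-HMAC-CHAP exchange actually happens.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$key" "${ckey_args[@]}"

    # Verify the established qpair reports the negotiated parameters.
    rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -e --arg g "$dhgroup" \
        '.[0].auth.state == "completed" and .[0].auth.digest == "sha512" and .[0].auth.dhgroup == $g'

    hostrpc bdev_nvme_detach_controller nvme0
    rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
done

The trace additionally re-checks each combination through the kernel initiator, using nvme connect with the corresponding plaintext DHHC-1 secrets (--dhchap-secret / --dhchap-ctrl-secret) followed by nvme disconnect, which is where the "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)" lines come from.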
00:15:59.916 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:59.916 10:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.916 10:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.916 10:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.916 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.916 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:59.916 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:00.173 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:00.173 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.173 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:00.173 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:00.173 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:00.173 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.173 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:00.173 10:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.173 10:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.173 10:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.173 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:00.174 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:00.431 00:16:00.431 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.431 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.431 10:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.688 10:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.688 10:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.688 10:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.688 10:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:00.688 10:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.688 10:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.688 { 00:16:00.688 "cntlid": 127, 00:16:00.688 "qid": 0, 00:16:00.688 "state": "enabled", 00:16:00.688 "thread": "nvmf_tgt_poll_group_000", 00:16:00.688 "listen_address": { 00:16:00.688 "trtype": "TCP", 00:16:00.688 "adrfam": "IPv4", 00:16:00.688 "traddr": "10.0.0.2", 00:16:00.688 "trsvcid": "4420" 00:16:00.688 }, 00:16:00.688 "peer_address": { 00:16:00.688 "trtype": "TCP", 00:16:00.688 "adrfam": "IPv4", 00:16:00.688 "traddr": "10.0.0.1", 00:16:00.688 "trsvcid": "48360" 00:16:00.688 }, 00:16:00.688 "auth": { 00:16:00.688 "state": "completed", 00:16:00.688 "digest": "sha512", 00:16:00.688 "dhgroup": "ffdhe4096" 00:16:00.688 } 00:16:00.688 } 00:16:00.688 ]' 00:16:00.689 10:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.946 10:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.946 10:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.946 10:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:00.946 10:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.946 10:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.946 10:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.946 10:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.203 10:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:16:02.136 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.136 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.137 10:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.394 10:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.394 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.394 10:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.651 00:16:02.908 10:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.908 10:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.908 10:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.164 10:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.164 10:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.164 10:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.164 10:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.164 10:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.164 10:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.164 { 00:16:03.164 "cntlid": 129, 00:16:03.164 "qid": 0, 00:16:03.164 "state": "enabled", 00:16:03.164 "thread": "nvmf_tgt_poll_group_000", 00:16:03.164 "listen_address": { 00:16:03.164 "trtype": "TCP", 00:16:03.164 "adrfam": "IPv4", 00:16:03.164 "traddr": "10.0.0.2", 00:16:03.164 "trsvcid": "4420" 00:16:03.164 }, 00:16:03.164 "peer_address": { 00:16:03.164 "trtype": "TCP", 00:16:03.164 "adrfam": "IPv4", 00:16:03.164 "traddr": "10.0.0.1", 00:16:03.164 "trsvcid": "48378" 00:16:03.164 }, 00:16:03.164 "auth": { 00:16:03.164 "state": "completed", 00:16:03.164 "digest": "sha512", 00:16:03.164 "dhgroup": "ffdhe6144" 00:16:03.164 } 00:16:03.164 } 00:16:03.164 ]' 00:16:03.164 10:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.164 10:32:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.164 10:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.164 10:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:03.164 10:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.164 10:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.164 10:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.164 10:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.422 10:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:16:04.355 10:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.355 10:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:04.355 10:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.355 10:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.355 10:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.355 10:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.355 10:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:04.355 10:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:04.612 10:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:04.612 10:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.612 10:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:04.612 10:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:04.612 10:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:04.613 10:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.613 10:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.613 10:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.613 10:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.613 10:32:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.613 10:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.613 10:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.179 00:16:05.179 10:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.179 10:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.179 10:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.179 10:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.179 10:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.179 10:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.179 10:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.437 10:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.437 10:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.437 { 00:16:05.437 "cntlid": 131, 00:16:05.437 "qid": 0, 00:16:05.437 "state": "enabled", 00:16:05.437 "thread": "nvmf_tgt_poll_group_000", 00:16:05.437 "listen_address": { 00:16:05.437 "trtype": "TCP", 00:16:05.437 "adrfam": "IPv4", 00:16:05.437 "traddr": "10.0.0.2", 00:16:05.437 "trsvcid": "4420" 00:16:05.437 }, 00:16:05.437 "peer_address": { 00:16:05.437 "trtype": "TCP", 00:16:05.437 "adrfam": "IPv4", 00:16:05.437 "traddr": "10.0.0.1", 00:16:05.437 "trsvcid": "48398" 00:16:05.437 }, 00:16:05.437 "auth": { 00:16:05.437 "state": "completed", 00:16:05.437 "digest": "sha512", 00:16:05.437 "dhgroup": "ffdhe6144" 00:16:05.437 } 00:16:05.437 } 00:16:05.437 ]' 00:16:05.437 10:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.437 10:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.437 10:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.437 10:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.437 10:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.437 10:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.437 10:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.437 10:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.728 10:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:16:06.684 10:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.684 10:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.684 10:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.684 10:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.684 10:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.684 10:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.684 10:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:06.684 10:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:06.942 10:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:06.942 10:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.942 10:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.942 10:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:06.942 10:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:06.942 10:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.942 10:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.942 10:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.942 10:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.942 10:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.942 10:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.942 10:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.508 00:16:07.508 10:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.508 10:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.508 10:32:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.508 10:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.508 10:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.508 10:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.508 10:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.508 10:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.508 10:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.508 { 00:16:07.508 "cntlid": 133, 00:16:07.508 "qid": 0, 00:16:07.508 "state": "enabled", 00:16:07.508 "thread": "nvmf_tgt_poll_group_000", 00:16:07.508 "listen_address": { 00:16:07.508 "trtype": "TCP", 00:16:07.508 "adrfam": "IPv4", 00:16:07.508 "traddr": "10.0.0.2", 00:16:07.508 "trsvcid": "4420" 00:16:07.508 }, 00:16:07.508 "peer_address": { 00:16:07.508 "trtype": "TCP", 00:16:07.508 "adrfam": "IPv4", 00:16:07.508 "traddr": "10.0.0.1", 00:16:07.508 "trsvcid": "48412" 00:16:07.508 }, 00:16:07.508 "auth": { 00:16:07.508 "state": "completed", 00:16:07.508 "digest": "sha512", 00:16:07.508 "dhgroup": "ffdhe6144" 00:16:07.508 } 00:16:07.508 } 00:16:07.508 ]' 00:16:07.508 10:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.766 10:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.766 10:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.766 10:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:07.766 10:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.766 10:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.766 10:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.766 10:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.025 10:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:16:08.959 10:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.959 10:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:08.959 10:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.959 10:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.959 10:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.959 10:32:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.959 10:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:08.959 10:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:09.217 10:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:09.217 10:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.217 10:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:09.217 10:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:09.217 10:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:09.217 10:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.217 10:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:09.217 10:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.217 10:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.217 10:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.217 10:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:09.217 10:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:09.781 00:16:09.781 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.781 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.781 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.039 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.039 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.039 10:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.039 10:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.039 10:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.039 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.039 { 00:16:10.039 "cntlid": 135, 00:16:10.039 "qid": 0, 00:16:10.039 "state": "enabled", 00:16:10.039 "thread": "nvmf_tgt_poll_group_000", 00:16:10.039 "listen_address": { 00:16:10.039 "trtype": "TCP", 00:16:10.039 "adrfam": "IPv4", 00:16:10.040 "traddr": "10.0.0.2", 00:16:10.040 "trsvcid": "4420" 00:16:10.040 }, 
00:16:10.040 "peer_address": { 00:16:10.040 "trtype": "TCP", 00:16:10.040 "adrfam": "IPv4", 00:16:10.040 "traddr": "10.0.0.1", 00:16:10.040 "trsvcid": "42616" 00:16:10.040 }, 00:16:10.040 "auth": { 00:16:10.040 "state": "completed", 00:16:10.040 "digest": "sha512", 00:16:10.040 "dhgroup": "ffdhe6144" 00:16:10.040 } 00:16:10.040 } 00:16:10.040 ]' 00:16:10.040 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.040 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.040 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.040 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.040 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.040 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.040 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.040 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.298 10:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:16:11.232 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.232 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:11.232 10:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.232 10:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.232 10:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.232 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.232 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.232 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:11.232 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:11.489 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:11.489 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.489 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:11.489 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:11.489 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:11.489 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:11.489 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.489 10:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.489 10:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.489 10:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.489 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.489 10:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.423 00:16:12.423 10:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.423 10:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.423 10:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.680 10:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.680 10:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.680 10:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.680 10:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.680 10:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.680 10:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.680 { 00:16:12.680 "cntlid": 137, 00:16:12.680 "qid": 0, 00:16:12.680 "state": "enabled", 00:16:12.680 "thread": "nvmf_tgt_poll_group_000", 00:16:12.680 "listen_address": { 00:16:12.680 "trtype": "TCP", 00:16:12.680 "adrfam": "IPv4", 00:16:12.680 "traddr": "10.0.0.2", 00:16:12.680 "trsvcid": "4420" 00:16:12.680 }, 00:16:12.680 "peer_address": { 00:16:12.680 "trtype": "TCP", 00:16:12.680 "adrfam": "IPv4", 00:16:12.680 "traddr": "10.0.0.1", 00:16:12.680 "trsvcid": "42648" 00:16:12.680 }, 00:16:12.680 "auth": { 00:16:12.680 "state": "completed", 00:16:12.680 "digest": "sha512", 00:16:12.680 "dhgroup": "ffdhe8192" 00:16:12.680 } 00:16:12.680 } 00:16:12.680 ]' 00:16:12.680 10:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.680 10:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.680 10:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.680 10:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.680 10:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.680 10:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.680 10:33:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.680 10:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.939 10:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:16:13.871 10:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.871 10:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:13.871 10:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.871 10:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.871 10:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.871 10:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.871 10:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:13.871 10:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:14.128 10:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:14.128 10:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.128 10:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:14.128 10:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:14.128 10:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:14.128 10:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.128 10:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.129 10:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.129 10:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.129 10:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.129 10:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.129 10:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.060 00:16:15.060 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.060 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.060 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.060 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.060 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.060 10:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.060 10:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.060 10:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.060 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.060 { 00:16:15.060 "cntlid": 139, 00:16:15.060 "qid": 0, 00:16:15.060 "state": "enabled", 00:16:15.060 "thread": "nvmf_tgt_poll_group_000", 00:16:15.060 "listen_address": { 00:16:15.060 "trtype": "TCP", 00:16:15.060 "adrfam": "IPv4", 00:16:15.060 "traddr": "10.0.0.2", 00:16:15.060 "trsvcid": "4420" 00:16:15.060 }, 00:16:15.060 "peer_address": { 00:16:15.060 "trtype": "TCP", 00:16:15.060 "adrfam": "IPv4", 00:16:15.060 "traddr": "10.0.0.1", 00:16:15.060 "trsvcid": "42670" 00:16:15.060 }, 00:16:15.060 "auth": { 00:16:15.060 "state": "completed", 00:16:15.060 "digest": "sha512", 00:16:15.060 "dhgroup": "ffdhe8192" 00:16:15.060 } 00:16:15.060 } 00:16:15.060 ]' 00:16:15.060 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.317 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.317 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.317 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.317 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.317 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.317 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.317 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.574 10:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NTVlMTBhZDdjMTY3NDQ1OTRjODBkODE5Y2IxOTQ4ZDV1eDFQ: --dhchap-ctrl-secret DHHC-1:02:Y2Y5MTU4OWRhMjgzODQxYWJkNzRmMzk0MWQxMzFmM2IwNTM5Njc1ZDUwMWRjYmNh3gWnkg==: 00:16:16.506 10:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.506 10:33:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:16.506 10:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.506 10:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.506 10:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.506 10:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.506 10:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:16.506 10:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:16.763 10:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:16.763 10:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.763 10:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:16.763 10:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:16.763 10:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:16.763 10:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.763 10:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.763 10:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.763 10:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.763 10:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.763 10:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.763 10:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.697 00:16:17.697 10:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.697 10:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.697 10:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.697 10:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.697 10:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.697 10:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.697 10:33:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:17.697 10:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.697 10:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.697 { 00:16:17.697 "cntlid": 141, 00:16:17.697 "qid": 0, 00:16:17.697 "state": "enabled", 00:16:17.697 "thread": "nvmf_tgt_poll_group_000", 00:16:17.697 "listen_address": { 00:16:17.697 "trtype": "TCP", 00:16:17.697 "adrfam": "IPv4", 00:16:17.697 "traddr": "10.0.0.2", 00:16:17.697 "trsvcid": "4420" 00:16:17.697 }, 00:16:17.697 "peer_address": { 00:16:17.697 "trtype": "TCP", 00:16:17.697 "adrfam": "IPv4", 00:16:17.697 "traddr": "10.0.0.1", 00:16:17.697 "trsvcid": "42688" 00:16:17.697 }, 00:16:17.697 "auth": { 00:16:17.697 "state": "completed", 00:16:17.697 "digest": "sha512", 00:16:17.697 "dhgroup": "ffdhe8192" 00:16:17.697 } 00:16:17.697 } 00:16:17.697 ]' 00:16:17.697 10:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.955 10:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.955 10:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.955 10:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.955 10:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.955 10:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.955 10:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.955 10:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.212 10:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MDk1ZjVjMzE3NmQ3OTU5ZjQ5MWZjZjMyZDM4MWY2ZmE0YWY2ZmMwZWM0NDAxNjlhUhq6Hg==: --dhchap-ctrl-secret DHHC-1:01:OGQ0MjE1MmExM2Q5NTc0N2QxNWUxMzY3MWNiOTg0ODAeCGYN: 00:16:19.145 10:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.145 10:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:19.145 10:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.145 10:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.145 10:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.145 10:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.145 10:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:19.145 10:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:19.403 10:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:16:19.403 10:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.403 10:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.403 10:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:19.403 10:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:19.403 10:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.403 10:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:19.403 10:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.403 10:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.403 10:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.403 10:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.403 10:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.336 00:16:20.336 10:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.336 10:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.336 10:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.594 10:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.594 10:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.594 10:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.594 10:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.594 10:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.594 10:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.594 { 00:16:20.594 "cntlid": 143, 00:16:20.594 "qid": 0, 00:16:20.594 "state": "enabled", 00:16:20.594 "thread": "nvmf_tgt_poll_group_000", 00:16:20.594 "listen_address": { 00:16:20.594 "trtype": "TCP", 00:16:20.594 "adrfam": "IPv4", 00:16:20.594 "traddr": "10.0.0.2", 00:16:20.594 "trsvcid": "4420" 00:16:20.594 }, 00:16:20.594 "peer_address": { 00:16:20.594 "trtype": "TCP", 00:16:20.594 "adrfam": "IPv4", 00:16:20.594 "traddr": "10.0.0.1", 00:16:20.594 "trsvcid": "54082" 00:16:20.594 }, 00:16:20.594 "auth": { 00:16:20.594 "state": "completed", 00:16:20.594 "digest": "sha512", 00:16:20.594 "dhgroup": "ffdhe8192" 00:16:20.594 } 00:16:20.594 } 00:16:20.594 ]' 00:16:20.594 10:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.594 10:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.594 
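Note: the entries above and below are one pass of the connect_authenticate() loop in target/auth.sh: the host side is pinned to a single digest and DH group with bdev_nvme_set_options, the host NQN is added to the subsystem with the DH-HMAC-CHAP key under test, a controller is attached from the host with the same key material, and the resulting qpair is then inspected with jq. The sketch below restates that sequence using only the RPCs and flags visible in this log; the socket path, NQNs and key names are the ones from this particular run (the script itself goes through its rpc_cmd/hostrpc wrappers rather than calling rpc.py directly for the target side):

    # pin the host (initiator) to one digest and one DH group
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # allow the host NQN on the subsystem with the key under test
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --dhchap-key key3

    # attach a controller from the host side with the same key; this triggers DH-HMAC-CHAP
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

    # confirm on the target side that the qpair negotiated the expected parameters
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'   # expect: sha512 / ffdhe8192 / completed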
10:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.594 10:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.594 10:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.594 10:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.594 10:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.594 10:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.851 10:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:16:21.784 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.784 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:21.784 10:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.784 10:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.784 10:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.784 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:21.784 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:21.784 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:21.784 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:21.784 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:21.784 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:22.042 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:22.042 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.042 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:22.042 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:22.042 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:22.042 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.042 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:22.042 10:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.042 10:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.042 10:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.042 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.042 10:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.974 00:16:22.974 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.974 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.974 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.974 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.974 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.974 10:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.974 10:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.974 10:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.974 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.974 { 00:16:22.974 "cntlid": 145, 00:16:22.974 "qid": 0, 00:16:22.974 "state": "enabled", 00:16:22.974 "thread": "nvmf_tgt_poll_group_000", 00:16:22.974 "listen_address": { 00:16:22.974 "trtype": "TCP", 00:16:22.974 "adrfam": "IPv4", 00:16:22.974 "traddr": "10.0.0.2", 00:16:22.974 "trsvcid": "4420" 00:16:22.974 }, 00:16:22.974 "peer_address": { 00:16:22.974 "trtype": "TCP", 00:16:22.974 "adrfam": "IPv4", 00:16:22.974 "traddr": "10.0.0.1", 00:16:22.974 "trsvcid": "54114" 00:16:22.974 }, 00:16:22.974 "auth": { 00:16:22.974 "state": "completed", 00:16:22.974 "digest": "sha512", 00:16:22.974 "dhgroup": "ffdhe8192" 00:16:22.974 } 00:16:22.974 } 00:16:22.974 ]' 00:16:22.974 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.232 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.232 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.232 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.232 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.232 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.232 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.232 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.490 10:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTc4NTUxZGZlMTVjMzZmZjM3M2UzOThjYmYxN2VkNmM3MGQ1NmRhNTViMTQ5NjMw79XBaA==: --dhchap-ctrl-secret DHHC-1:03:ZWZkNTQ1OTRlMGUwNjE0NDcwMzU2YjYyZmVkZWEyYzUxMWI5Yjg0M2UwMGQyNDE1NzQ5Y2QyZTc1NWUxZWU3Nju7jY4=: 00:16:24.423 10:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:24.424 10:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:16:24.989 request: 00:16:24.989 { 00:16:24.989 "name": "nvme0", 00:16:24.989 "trtype": "tcp", 00:16:24.989 "traddr": "10.0.0.2", 00:16:24.989 "adrfam": "ipv4", 00:16:24.989 "trsvcid": "4420", 00:16:24.989 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:24.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:24.989 "prchk_reftag": false, 00:16:24.989 "prchk_guard": false, 00:16:24.989 "hdgst": false, 00:16:24.989 "ddgst": false, 00:16:24.989 "dhchap_key": "key2", 00:16:24.989 "method": "bdev_nvme_attach_controller", 00:16:24.989 "req_id": 1 00:16:24.989 } 00:16:24.989 Got JSON-RPC error response 00:16:24.989 response: 00:16:24.989 { 00:16:24.989 "code": -5, 00:16:24.989 "message": "Input/output error" 00:16:24.989 } 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:24.989 10:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:25.919 request: 00:16:25.919 { 00:16:25.919 "name": "nvme0", 00:16:25.919 "trtype": "tcp", 00:16:25.919 "traddr": "10.0.0.2", 00:16:25.919 "adrfam": "ipv4", 00:16:25.919 "trsvcid": "4420", 00:16:25.919 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:25.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:25.919 "prchk_reftag": false, 00:16:25.919 "prchk_guard": false, 00:16:25.919 "hdgst": false, 00:16:25.919 "ddgst": false, 00:16:25.919 "dhchap_key": "key1", 00:16:25.919 "dhchap_ctrlr_key": "ckey2", 00:16:25.919 "method": "bdev_nvme_attach_controller", 00:16:25.919 "req_id": 1 00:16:25.919 } 00:16:25.919 Got JSON-RPC error response 00:16:25.919 response: 00:16:25.919 { 00:16:25.919 "code": -5, 00:16:25.919 "message": "Input/output error" 00:16:25.919 } 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.919 10:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.855 request: 00:16:26.855 { 00:16:26.855 "name": "nvme0", 00:16:26.855 "trtype": "tcp", 00:16:26.855 "traddr": "10.0.0.2", 00:16:26.855 "adrfam": "ipv4", 00:16:26.855 "trsvcid": "4420", 00:16:26.855 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:26.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:26.855 "prchk_reftag": false, 00:16:26.855 "prchk_guard": false, 00:16:26.855 "hdgst": false, 00:16:26.855 "ddgst": false, 00:16:26.855 "dhchap_key": "key1", 00:16:26.855 "dhchap_ctrlr_key": "ckey1", 00:16:26.855 "method": "bdev_nvme_attach_controller", 00:16:26.855 "req_id": 1 00:16:26.855 } 00:16:26.855 Got JSON-RPC error response 00:16:26.856 response: 00:16:26.856 { 00:16:26.856 "code": -5, 00:16:26.856 "message": "Input/output error" 00:16:26.856 } 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1191787 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1191787 ']' 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1191787 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1191787 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1191787' 00:16:26.856 killing process with pid 1191787 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1191787 00:16:26.856 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1191787 00:16:27.113 10:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:27.113 10:33:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:27.113 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:27.113 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.113 10:33:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1213526 00:16:27.113 10:33:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:27.113 10:33:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1213526 00:16:27.113 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1213526 ']' 00:16:27.113 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.113 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.113 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.113 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.113 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.370 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.370 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:27.370 10:33:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:27.370 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:27.370 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.370 10:33:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.370 10:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:27.370 10:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1213526 00:16:27.370 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1213526 ']' 00:16:27.370 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.370 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.370 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
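Note: at this point the first target process (pid 1191787) has been killed and nvmf_tgt has been restarted by nvmfappstart with --wait-for-rpc and -L nvmf_auth, so the remaining passes run against a target (pid 1213526) with the auth-layer debug log enabled. The entries that follow re-run the attach path and then exercise cases where the host and subsystem key or digest configuration do not match, expecting the host RPC to fail with the JSON-RPC "Input/output error" (code -5) responses captured in this log. A minimal stand-in for the script's NOT/hostrpc wrapper around one such negative case, using only flags that appear above (socket path, NQNs and key name are from this run):

    # attach with a key the subsystem does not have configured; authentication must fail
    if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 \
          -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
          -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
        echo "unexpected success: DH-HMAC-CHAP should have been rejected" >&2
        exit 1
    fi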
00:16:27.370 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.370 10:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.628 10:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.611 00:16:28.611 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.611 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.611 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.869 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.869 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.869 10:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.869 10:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.869 10:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.869 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.869 { 00:16:28.869 
"cntlid": 1, 00:16:28.869 "qid": 0, 00:16:28.869 "state": "enabled", 00:16:28.869 "thread": "nvmf_tgt_poll_group_000", 00:16:28.869 "listen_address": { 00:16:28.869 "trtype": "TCP", 00:16:28.869 "adrfam": "IPv4", 00:16:28.869 "traddr": "10.0.0.2", 00:16:28.869 "trsvcid": "4420" 00:16:28.869 }, 00:16:28.869 "peer_address": { 00:16:28.869 "trtype": "TCP", 00:16:28.869 "adrfam": "IPv4", 00:16:28.869 "traddr": "10.0.0.1", 00:16:28.869 "trsvcid": "42458" 00:16:28.869 }, 00:16:28.869 "auth": { 00:16:28.869 "state": "completed", 00:16:28.869 "digest": "sha512", 00:16:28.869 "dhgroup": "ffdhe8192" 00:16:28.869 } 00:16:28.869 } 00:16:28.869 ]' 00:16:28.869 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.869 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.869 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.869 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:28.869 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.869 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.869 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.869 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.127 10:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YmEwZGQ5ODBlNmNiN2FmYzhmNTViMDFlZjg4NzVhNmZhNjg1ZWI2MTYxNmUzOThiMGFmNWQyZWRiMjM2YmI1N8K9YwU=: 00:16:30.062 10:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.062 10:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:30.062 10:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.062 10:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.062 10:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.062 10:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:30.062 10:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.062 10:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.062 10:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.062 10:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:30.062 10:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:30.626 10:33:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.626 10:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:30.626 10:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.626 10:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:30.626 10:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:30.626 10:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:30.626 10:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:30.626 10:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.626 10:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.626 request: 00:16:30.626 { 00:16:30.626 "name": "nvme0", 00:16:30.626 "trtype": "tcp", 00:16:30.626 "traddr": "10.0.0.2", 00:16:30.626 "adrfam": "ipv4", 00:16:30.626 "trsvcid": "4420", 00:16:30.626 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:30.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:30.626 "prchk_reftag": false, 00:16:30.626 "prchk_guard": false, 00:16:30.626 "hdgst": false, 00:16:30.626 "ddgst": false, 00:16:30.626 "dhchap_key": "key3", 00:16:30.626 "method": "bdev_nvme_attach_controller", 00:16:30.626 "req_id": 1 00:16:30.626 } 00:16:30.626 Got JSON-RPC error response 00:16:30.626 response: 00:16:30.626 { 00:16:30.626 "code": -5, 00:16:30.626 "message": "Input/output error" 00:16:30.626 } 00:16:30.884 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:30.884 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:30.884 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:30.884 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:30.884 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:16:30.884 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:30.884 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:30.884 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:31.141 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.141 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:31.142 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.142 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:31.142 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.142 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:31.142 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.142 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.142 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.399 request: 00:16:31.399 { 00:16:31.399 "name": "nvme0", 00:16:31.399 "trtype": "tcp", 00:16:31.399 "traddr": "10.0.0.2", 00:16:31.399 "adrfam": "ipv4", 00:16:31.399 "trsvcid": "4420", 00:16:31.399 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:31.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:31.399 "prchk_reftag": false, 00:16:31.399 "prchk_guard": false, 00:16:31.399 "hdgst": false, 00:16:31.399 "ddgst": false, 00:16:31.399 "dhchap_key": "key3", 00:16:31.399 "method": "bdev_nvme_attach_controller", 00:16:31.399 "req_id": 1 00:16:31.399 } 00:16:31.399 Got JSON-RPC error response 00:16:31.399 response: 00:16:31.399 { 00:16:31.399 "code": -5, 00:16:31.399 "message": "Input/output error" 00:16:31.399 } 00:16:31.399 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:31.399 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:31.399 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:31.399 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:31.399 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:31.399 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:16:31.399 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:31.399 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:31.399 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:31.399 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:31.657 10:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:31.914 request: 00:16:31.914 { 00:16:31.914 "name": "nvme0", 00:16:31.914 "trtype": "tcp", 00:16:31.914 "traddr": "10.0.0.2", 00:16:31.914 "adrfam": "ipv4", 00:16:31.914 "trsvcid": "4420", 00:16:31.914 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:31.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:31.914 "prchk_reftag": false, 00:16:31.914 "prchk_guard": false, 00:16:31.914 "hdgst": false, 00:16:31.914 "ddgst": false, 00:16:31.914 
"dhchap_key": "key0", 00:16:31.914 "dhchap_ctrlr_key": "key1", 00:16:31.914 "method": "bdev_nvme_attach_controller", 00:16:31.914 "req_id": 1 00:16:31.914 } 00:16:31.914 Got JSON-RPC error response 00:16:31.914 response: 00:16:31.914 { 00:16:31.914 "code": -5, 00:16:31.914 "message": "Input/output error" 00:16:31.914 } 00:16:31.914 10:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:31.914 10:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:31.914 10:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:31.914 10:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:31.914 10:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:31.914 10:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:32.171 00:16:32.171 10:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:32.171 10:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.171 10:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:16:32.429 10:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.429 10:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.429 10:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.686 10:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:16:32.686 10:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:32.686 10:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1191919 00:16:32.686 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1191919 ']' 00:16:32.686 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1191919 00:16:32.686 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:32.686 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.686 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1191919 00:16:32.686 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:32.686 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:32.686 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1191919' 00:16:32.686 killing process with pid 1191919 00:16:32.686 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1191919 00:16:32.686 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1191919 
00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:33.253 rmmod nvme_tcp 00:16:33.253 rmmod nvme_fabrics 00:16:33.253 rmmod nvme_keyring 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1213526 ']' 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1213526 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1213526 ']' 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1213526 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1213526 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1213526' 00:16:33.253 killing process with pid 1213526 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1213526 00:16:33.253 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1213526 00:16:33.512 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:33.512 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:33.512 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:33.512 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:33.512 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:33.512 10:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.512 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.512 10:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.416 10:33:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:35.416 10:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.CgU /tmp/spdk.key-sha256.6xW /tmp/spdk.key-sha384.4bd /tmp/spdk.key-sha512.Fgq /tmp/spdk.key-sha512.V03 /tmp/spdk.key-sha384.NdH /tmp/spdk.key-sha256.7yd '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:16:35.416 00:16:35.416 real 3m0.394s 00:16:35.416 user 7m2.004s 00:16:35.416 sys 0m25.035s 00:16:35.416 10:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:35.416 10:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.416 ************************************ 00:16:35.416 END TEST nvmf_auth_target 00:16:35.416 ************************************ 00:16:35.416 10:33:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:35.416 10:33:23 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:16:35.416 10:33:23 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:35.416 10:33:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:35.416 10:33:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:35.416 10:33:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:35.416 ************************************ 00:16:35.416 START TEST nvmf_bdevio_no_huge 00:16:35.416 ************************************ 00:16:35.416 10:33:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:35.675 * Looking for test storage... 00:16:35.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
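That closes nvmf_auth_target (3m0.394s wall time); the wrapper at nvmf.sh@60 then starts nvmf_bdevio_no_huge, which re-runs the generic bdevio exerciser against a TCP target brought up without hugepages. Sourcing nvmf/common.sh above only establishes the defaults (NVMF_PORT 4420-4422, a freshly generated NVME_HOSTNQN, NVME_SUBNQN) before any devices are touched. A sketch of invoking just this suite by hand on the same workspace, assuming the e810 test NICs and the namespace conventions common.sh relies on are present and the shell is root:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages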
00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.675 10:33:24 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:16:35.675 10:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:38.204 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:38.204 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:38.204 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:38.205 
10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:38.205 Found net devices under 0000:09:00.0: cvl_0_0 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:38.205 Found net devices under 0000:09:00.1: cvl_0_1 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.205 10:33:26 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:38.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:16:38.205 00:16:38.205 --- 10.0.0.2 ping statistics --- 00:16:38.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.205 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:38.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:38.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:16:38.205 00:16:38.205 --- 10.0.0.1 ping statistics --- 00:16:38.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.205 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1216185 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1216185 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1216185 ']' 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:38.205 [2024-07-15 10:33:26.350221] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:38.205 [2024-07-15 10:33:26.350295] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:38.205 [2024-07-15 10:33:26.420270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:38.205 [2024-07-15 10:33:26.524559] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.205 [2024-07-15 10:33:26.524608] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.205 [2024-07-15 10:33:26.524632] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.205 [2024-07-15 10:33:26.524643] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.205 [2024-07-15 10:33:26.524653] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
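With --no-huge -s 1024 the target (pid 1216185) runs on ordinary 4 KiB pages under a 1024 MiB memory cap, inside the cvl_0_0_ns_spdk namespace and pinned to core mask 0x78 (cores 3-6, matching the four reactor notices below). The rpc_cmd calls that follow at bdevio.sh@18-22 then build the whole bdevio fixture; written out by hand against the default /var/tmp/spdk.sock socket it is just the sequence below (every command appears verbatim in the rpc_cmd lines further down; paths are relative to the spdk checkout and the comments are added here):

# create the TCP transport (bdevio.sh@18)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512-byte blocks, exported through allow-any-host subsystem cnode1
# and listening on 10.0.0.2:4420 (bdevio.sh@19-22)
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420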
00:16:38.205 [2024-07-15 10:33:26.524762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:38.205 [2024-07-15 10:33:26.524888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:38.205 [2024-07-15 10:33:26.524938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:38.205 [2024-07-15 10:33:26.524942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:38.205 [2024-07-15 10:33:26.648009] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:38.205 Malloc0 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:38.205 [2024-07-15 10:33:26.686316] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.205 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:38.206 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:38.206 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:16:38.206 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:16:38.206 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:38.206 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:38.206 { 00:16:38.206 "params": { 00:16:38.206 "name": "Nvme$subsystem", 00:16:38.206 "trtype": "$TEST_TRANSPORT", 00:16:38.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:38.206 "adrfam": "ipv4", 00:16:38.206 "trsvcid": "$NVMF_PORT", 00:16:38.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:38.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:38.206 "hdgst": ${hdgst:-false}, 00:16:38.206 "ddgst": ${ddgst:-false} 00:16:38.206 }, 00:16:38.206 "method": "bdev_nvme_attach_controller" 00:16:38.206 } 00:16:38.206 EOF 00:16:38.206 )") 00:16:38.206 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:16:38.206 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:16:38.206 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:16:38.206 10:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:38.206 "params": { 00:16:38.206 "name": "Nvme1", 00:16:38.206 "trtype": "tcp", 00:16:38.206 "traddr": "10.0.0.2", 00:16:38.206 "adrfam": "ipv4", 00:16:38.206 "trsvcid": "4420", 00:16:38.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:38.206 "hdgst": false, 00:16:38.206 "ddgst": false 00:16:38.206 }, 00:16:38.206 "method": "bdev_nvme_attach_controller" 00:16:38.206 }' 00:16:38.206 [2024-07-15 10:33:26.733910] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:38.206 [2024-07-15 10:33:26.733983] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1216326 ] 00:16:38.462 [2024-07-15 10:33:26.797411] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:38.462 [2024-07-15 10:33:26.911782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.462 [2024-07-15 10:33:26.911829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.462 [2024-07-15 10:33:26.911833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.719 I/O targets: 00:16:38.719 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:38.719 00:16:38.719 00:16:38.719 CUnit - A unit testing framework for C - Version 2.1-3 00:16:38.719 http://cunit.sourceforge.net/ 00:16:38.719 00:16:38.719 00:16:38.719 Suite: bdevio tests on: Nvme1n1 00:16:38.719 Test: blockdev write read block ...passed 00:16:38.719 Test: blockdev write zeroes read block ...passed 00:16:38.719 Test: blockdev write zeroes read no split ...passed 00:16:38.719 Test: blockdev write zeroes read split ...passed 00:16:38.719 Test: blockdev write zeroes read split partial ...passed 00:16:38.719 Test: blockdev reset ...[2024-07-15 10:33:27.229939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:38.719 [2024-07-15 10:33:27.230061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a2fb0 (9): Bad file descriptor 00:16:38.719 [2024-07-15 10:33:27.250217] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:38.719 passed 00:16:38.719 Test: blockdev write read 8 blocks ...passed 00:16:38.719 Test: blockdev write read size > 128k ...passed 00:16:38.719 Test: blockdev write read invalid size ...passed 00:16:38.976 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:38.976 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:38.976 Test: blockdev write read max offset ...passed 00:16:38.976 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:38.976 Test: blockdev writev readv 8 blocks ...passed 00:16:38.976 Test: blockdev writev readv 30 x 1block ...passed 00:16:38.976 Test: blockdev writev readv block ...passed 00:16:38.976 Test: blockdev writev readv size > 128k ...passed 00:16:38.976 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:38.976 Test: blockdev comparev and writev ...[2024-07-15 10:33:27.503834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.976 [2024-07-15 10:33:27.503870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:38.976 [2024-07-15 10:33:27.503895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.976 [2024-07-15 10:33:27.503912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.976 [2024-07-15 10:33:27.504220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.976 [2024-07-15 10:33:27.504245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:38.976 [2024-07-15 10:33:27.504268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.976 [2024-07-15 10:33:27.504284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:38.976 [2024-07-15 10:33:27.504589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.976 [2024-07-15 10:33:27.504614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:38.976 [2024-07-15 10:33:27.504636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.976 [2024-07-15 10:33:27.504652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:38.976 [2024-07-15 10:33:27.504974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.976 [2024-07-15 10:33:27.504999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:38.976 [2024-07-15 10:33:27.505021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.976 [2024-07-15 10:33:27.505036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:39.234 passed 00:16:39.234 Test: blockdev nvme passthru rw ...passed 00:16:39.234 Test: blockdev nvme passthru vendor specific ...[2024-07-15 10:33:27.589069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:39.234 [2024-07-15 10:33:27.589098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:39.234 [2024-07-15 10:33:27.589245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:39.234 [2024-07-15 10:33:27.589268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:39.234 [2024-07-15 10:33:27.589407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:39.234 [2024-07-15 10:33:27.589430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:39.234 [2024-07-15 10:33:27.589573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:39.234 [2024-07-15 10:33:27.589596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:39.234 passed 00:16:39.234 Test: blockdev nvme admin passthru ...passed 00:16:39.234 Test: blockdev copy ...passed 00:16:39.234 00:16:39.234 Run Summary: Type Total Ran Passed Failed Inactive 00:16:39.234 suites 1 1 n/a 0 0 00:16:39.234 tests 23 23 23 0 0 00:16:39.234 asserts 152 152 152 0 n/a 00:16:39.234 00:16:39.234 Elapsed time = 1.059 seconds 00:16:39.491 10:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:39.491 10:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.491 10:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:39.491 10:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.491 10:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:39.491 10:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:39.491 10:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:39.491 10:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:16:39.491 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:39.491 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:16:39.491 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:39.491 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:39.491 rmmod nvme_tcp 00:16:39.491 rmmod nvme_fabrics 00:16:39.491 rmmod nvme_keyring 00:16:39.491 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:39.491 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:16:39.491 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:16:39.491 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1216185 ']' 00:16:39.491 10:33:28 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1216185 00:16:39.491 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1216185 ']' 00:16:39.491 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1216185 00:16:39.491 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:16:39.748 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.748 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1216185 00:16:39.748 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:39.748 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:39.748 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1216185' 00:16:39.749 killing process with pid 1216185 00:16:39.749 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1216185 00:16:39.749 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1216185 00:16:40.006 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:40.006 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:40.006 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:40.006 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:40.006 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:40.006 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.006 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.006 10:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.538 10:33:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:42.538 00:16:42.538 real 0m6.525s 00:16:42.538 user 0m10.209s 00:16:42.538 sys 0m2.510s 00:16:42.538 10:33:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:42.538 10:33:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:42.538 ************************************ 00:16:42.538 END TEST nvmf_bdevio_no_huge 00:16:42.538 ************************************ 00:16:42.538 10:33:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:42.538 10:33:30 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:42.538 10:33:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:42.538 10:33:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:42.538 10:33:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:42.538 ************************************ 00:16:42.538 START TEST nvmf_tls 00:16:42.538 ************************************ 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:42.538 * Looking for test storage... 
00:16:42.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:16:42.538 10:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:16:44.442 
10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.442 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:44.443 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:44.443 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:44.443 Found net devices under 0000:09:00.0: cvl_0_0 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:44.443 Found net devices under 0000:09:00.1: cvl_0_1 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:44.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:16:44.443 00:16:44.443 --- 10.0.0.2 ping statistics --- 00:16:44.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.443 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:44.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:44.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:16:44.443 00:16:44.443 --- 10.0.0.1 ping statistics --- 00:16:44.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.443 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1218397 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1218397 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1218397 ']' 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.443 10:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.443 [2024-07-15 10:33:32.940071] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:44.443 [2024-07-15 10:33:32.940169] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.443 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.702 [2024-07-15 10:33:33.005242] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.702 [2024-07-15 10:33:33.110526] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.702 [2024-07-15 10:33:33.110593] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:44.702 [2024-07-15 10:33:33.110616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.702 [2024-07-15 10:33:33.110627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.702 [2024-07-15 10:33:33.110636] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.702 [2024-07-15 10:33:33.110661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.702 10:33:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.702 10:33:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:44.702 10:33:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:44.702 10:33:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:44.702 10:33:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.702 10:33:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.702 10:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:44.702 10:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:44.959 true 00:16:44.959 10:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:44.959 10:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:16:45.217 10:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:16:45.217 10:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:45.217 10:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:45.475 10:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:45.475 10:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:16:45.732 10:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:16:45.732 10:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:45.732 10:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:45.989 10:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:45.989 10:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:16:46.246 10:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:16:46.246 10:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:46.246 10:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:46.246 10:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:46.504 10:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:16:46.504 10:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:46.504 10:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:46.761 10:33:35 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:46.761 10:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:47.018 10:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:16:47.018 10:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:47.018 10:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:47.276 10:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:47.276 10:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:47.534 10:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:47.534 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:47.534 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:16:47.534 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.eEfGYfgxIe 00:16:47.534 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:47.534 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.V7EwLmJ7YP 00:16:47.534 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:47.534 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:47.534 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.eEfGYfgxIe 00:16:47.534 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.V7EwLmJ7YP 00:16:47.534 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:16:47.792 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:48.357 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.eEfGYfgxIe 00:16:48.357 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eEfGYfgxIe 00:16:48.357 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:48.615 [2024-07-15 10:33:36.962945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.615 10:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:48.871 10:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:49.128 [2024-07-15 10:33:37.532436] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:49.129 [2024-07-15 10:33:37.532642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.129 10:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:49.387 malloc0 00:16:49.387 10:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:49.644 10:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eEfGYfgxIe 00:16:49.902 [2024-07-15 10:33:38.320896] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:49.902 10:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.eEfGYfgxIe 00:16:49.902 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.955 Initializing NVMe Controllers 00:16:59.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:59.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:59.955 Initialization complete. Launching workers. 
00:16:59.955 ======================================================== 00:16:59.955 Latency(us) 00:16:59.955 Device Information : IOPS MiB/s Average min max 00:16:59.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8769.30 34.26 7300.10 1094.33 9601.59 00:16:59.955 ======================================================== 00:16:59.955 Total : 8769.30 34.26 7300.10 1094.33 9601.59 00:16:59.955 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eEfGYfgxIe 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eEfGYfgxIe' 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1220291 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1220291 /var/tmp/bdevperf.sock 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1220291 ']' 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:59.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.955 10:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:59.955 [2024-07-15 10:33:48.489010] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:59.955 [2024-07-15 10:33:48.489117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220291 ] 00:17:00.212 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.213 [2024-07-15 10:33:48.547097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.213 [2024-07-15 10:33:48.651914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.213 10:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.213 10:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:00.213 10:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eEfGYfgxIe 00:17:00.470 [2024-07-15 10:33:48.998693] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:00.470 [2024-07-15 10:33:48.998842] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:00.727 TLSTESTn1 00:17:00.727 10:33:49 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:00.727 Running I/O for 10 seconds... 00:17:10.683 00:17:10.683 Latency(us) 00:17:10.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.683 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:10.683 Verification LBA range: start 0x0 length 0x2000 00:17:10.683 TLSTESTn1 : 10.02 3539.61 13.83 0.00 0.00 36099.09 9272.13 33010.73 00:17:10.683 =================================================================================================================== 00:17:10.683 Total : 3539.61 13.83 0.00 0.00 36099.09 9272.13 33010.73 00:17:10.683 0 00:17:10.940 10:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:10.940 10:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1220291 00:17:10.940 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1220291 ']' 00:17:10.940 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1220291 00:17:10.940 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:10.940 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:10.940 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1220291 00:17:10.940 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:10.940 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:10.940 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1220291' 00:17:10.940 killing process with pid 1220291 00:17:10.940 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1220291 00:17:10.940 Received shutdown signal, test time was about 10.000000 seconds 00:17:10.940 00:17:10.940 Latency(us) 00:17:10.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:17:10.940 =================================================================================================================== 00:17:10.940 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.940 [2024-07-15 10:33:59.268611] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:10.940 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1220291 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V7EwLmJ7YP 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V7EwLmJ7YP 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V7EwLmJ7YP 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.V7EwLmJ7YP' 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1221507 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1221507 /var/tmp/bdevperf.sock 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1221507 ']' 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.198 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.198 [2024-07-15 10:33:59.578987] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:11.198 [2024-07-15 10:33:59.579084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221507 ] 00:17:11.198 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.198 [2024-07-15 10:33:59.640588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.198 [2024-07-15 10:33:59.746965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.457 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.457 10:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:11.458 10:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V7EwLmJ7YP 00:17:11.715 [2024-07-15 10:34:00.082526] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:11.715 [2024-07-15 10:34:00.082673] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:11.715 [2024-07-15 10:34:00.088717] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:11.715 [2024-07-15 10:34:00.089519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x883f90 (107): Transport endpoint is not connected 00:17:11.715 [2024-07-15 10:34:00.090510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x883f90 (9): Bad file descriptor 00:17:11.715 [2024-07-15 10:34:00.091510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:11.715 [2024-07-15 10:34:00.091530] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:11.715 [2024-07-15 10:34:00.091547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:11.715 request: 00:17:11.715 { 00:17:11.715 "name": "TLSTEST", 00:17:11.715 "trtype": "tcp", 00:17:11.715 "traddr": "10.0.0.2", 00:17:11.715 "adrfam": "ipv4", 00:17:11.715 "trsvcid": "4420", 00:17:11.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.715 "prchk_reftag": false, 00:17:11.715 "prchk_guard": false, 00:17:11.715 "hdgst": false, 00:17:11.715 "ddgst": false, 00:17:11.715 "psk": "/tmp/tmp.V7EwLmJ7YP", 00:17:11.715 "method": "bdev_nvme_attach_controller", 00:17:11.715 "req_id": 1 00:17:11.715 } 00:17:11.715 Got JSON-RPC error response 00:17:11.715 response: 00:17:11.715 { 00:17:11.715 "code": -5, 00:17:11.715 "message": "Input/output error" 00:17:11.715 } 00:17:11.715 10:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1221507 00:17:11.715 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1221507 ']' 00:17:11.715 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1221507 00:17:11.715 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:11.715 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:11.715 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1221507 00:17:11.715 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:11.715 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:11.715 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1221507' 00:17:11.715 killing process with pid 1221507 00:17:11.715 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1221507 00:17:11.715 Received shutdown signal, test time was about 10.000000 seconds 00:17:11.715 00:17:11.715 Latency(us) 00:17:11.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.715 =================================================================================================================== 00:17:11.715 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:11.715 [2024-07-15 10:34:00.144008] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:11.716 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1221507 00:17:11.973 10:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:11.973 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:11.973 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.973 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.973 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.973 10:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eEfGYfgxIe 00:17:11.973 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:11.973 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eEfGYfgxIe 00:17:11.973 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:11.973 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.973 10:34:00 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eEfGYfgxIe 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eEfGYfgxIe' 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1221630 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1221630 /var/tmp/bdevperf.sock 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1221630 ']' 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.974 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.974 [2024-07-15 10:34:00.448896] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:11.974 [2024-07-15 10:34:00.448992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221630 ] 00:17:11.974 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.974 [2024-07-15 10:34:00.507301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.231 [2024-07-15 10:34:00.615923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.231 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.231 10:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:12.231 10:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.eEfGYfgxIe 00:17:12.489 [2024-07-15 10:34:00.971262] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.489 [2024-07-15 10:34:00.971389] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:12.489 [2024-07-15 10:34:00.980861] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:12.489 [2024-07-15 10:34:00.980897] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:12.489 [2024-07-15 10:34:00.980955] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:12.489 [2024-07-15 10:34:00.981186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11edf90 (107): Transport endpoint is not connected 00:17:12.489 [2024-07-15 10:34:00.982173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11edf90 (9): Bad file descriptor 00:17:12.489 [2024-07-15 10:34:00.983171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:12.489 [2024-07-15 10:34:00.983193] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:12.489 [2024-07-15 10:34:00.983211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:12.489 request: 00:17:12.489 { 00:17:12.489 "name": "TLSTEST", 00:17:12.489 "trtype": "tcp", 00:17:12.489 "traddr": "10.0.0.2", 00:17:12.489 "adrfam": "ipv4", 00:17:12.489 "trsvcid": "4420", 00:17:12.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:12.489 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:12.489 "prchk_reftag": false, 00:17:12.489 "prchk_guard": false, 00:17:12.489 "hdgst": false, 00:17:12.489 "ddgst": false, 00:17:12.489 "psk": "/tmp/tmp.eEfGYfgxIe", 00:17:12.489 "method": "bdev_nvme_attach_controller", 00:17:12.489 "req_id": 1 00:17:12.489 } 00:17:12.489 Got JSON-RPC error response 00:17:12.489 response: 00:17:12.489 { 00:17:12.489 "code": -5, 00:17:12.489 "message": "Input/output error" 00:17:12.489 } 00:17:12.489 10:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1221630 00:17:12.489 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1221630 ']' 00:17:12.489 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1221630 00:17:12.489 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:12.489 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.489 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1221630 00:17:12.489 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:12.489 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:12.489 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1221630' 00:17:12.489 killing process with pid 1221630 00:17:12.489 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1221630 00:17:12.489 Received shutdown signal, test time was about 10.000000 seconds 00:17:12.489 00:17:12.489 Latency(us) 00:17:12.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.489 =================================================================================================================== 00:17:12.489 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:12.489 [2024-07-15 10:34:01.035679] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:12.489 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1221630 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eEfGYfgxIe 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eEfGYfgxIe 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eEfGYfgxIe 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eEfGYfgxIe' 00:17:12.747 10:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.005 10:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1221768 00:17:13.005 10:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:13.005 10:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.005 10:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1221768 /var/tmp/bdevperf.sock 00:17:13.005 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1221768 ']' 00:17:13.005 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.005 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.005 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.005 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.005 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.005 [2024-07-15 10:34:01.338813] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:13.005 [2024-07-15 10:34:01.338906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221768 ] 00:17:13.005 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.005 [2024-07-15 10:34:01.396591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.005 [2024-07-15 10:34:01.499084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.262 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.262 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:13.262 10:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eEfGYfgxIe 00:17:13.519 [2024-07-15 10:34:01.883234] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:13.519 [2024-07-15 10:34:01.883331] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:13.519 [2024-07-15 10:34:01.893267] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:13.519 [2024-07-15 10:34:01.893303] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:13.519 [2024-07-15 10:34:01.893358] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:13.519 [2024-07-15 10:34:01.894208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f90 (107): Transport endpoint is not connected 00:17:13.519 [2024-07-15 10:34:01.895199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f90 (9): Bad file descriptor 00:17:13.519 [2024-07-15 10:34:01.896199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:13.519 [2024-07-15 10:34:01.896219] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:13.519 [2024-07-15 10:34:01.896235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:13.519 request: 00:17:13.519 { 00:17:13.519 "name": "TLSTEST", 00:17:13.519 "trtype": "tcp", 00:17:13.519 "traddr": "10.0.0.2", 00:17:13.519 "adrfam": "ipv4", 00:17:13.519 "trsvcid": "4420", 00:17:13.519 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:13.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:13.520 "prchk_reftag": false, 00:17:13.520 "prchk_guard": false, 00:17:13.520 "hdgst": false, 00:17:13.520 "ddgst": false, 00:17:13.520 "psk": "/tmp/tmp.eEfGYfgxIe", 00:17:13.520 "method": "bdev_nvme_attach_controller", 00:17:13.520 "req_id": 1 00:17:13.520 } 00:17:13.520 Got JSON-RPC error response 00:17:13.520 response: 00:17:13.520 { 00:17:13.520 "code": -5, 00:17:13.520 "message": "Input/output error" 00:17:13.520 } 00:17:13.520 10:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1221768 00:17:13.520 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1221768 ']' 00:17:13.520 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1221768 00:17:13.520 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:13.520 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.520 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1221768 00:17:13.520 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:13.520 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:13.520 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1221768' 00:17:13.520 killing process with pid 1221768 00:17:13.520 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1221768 00:17:13.520 Received shutdown signal, test time was about 10.000000 seconds 00:17:13.520 00:17:13.520 Latency(us) 00:17:13.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.520 =================================================================================================================== 00:17:13.520 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:13.520 [2024-07-15 10:34:01.947274] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:13.520 10:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1221768 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1221907 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1221907 /var/tmp/bdevperf.sock 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1221907 ']' 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.778 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.778 [2024-07-15 10:34:02.240706] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:13.778 [2024-07-15 10:34:02.240821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221907 ] 00:17:13.778 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.778 [2024-07-15 10:34:02.299581] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.035 [2024-07-15 10:34:02.408090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.035 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.035 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:14.035 10:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:14.293 [2024-07-15 10:34:02.795556] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:14.293 [2024-07-15 10:34:02.797349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4c770 (9): Bad file descriptor 00:17:14.293 [2024-07-15 10:34:02.798346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:14.293 [2024-07-15 10:34:02.798367] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:14.293 [2024-07-15 10:34:02.798385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
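For reference, the expected-failure attach traced just above can be reproduced by hand against the same bdevperf RPC socket; the command is the one shown in the trace, minus any --psk, and the trailing &&/|| branches are only an illustration of how the NOT wrapper treats the non-zero exit as the passing outcome.

# Expected-failure case: no --psk against a listener created with -k (TLS required).
# Command mirrors the traced rpc.py call; the echo tails are illustrative only.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    && echo "unexpected: attach succeeded" \
    || echo "attach failed as expected (JSON-RPC code -5, Input/output error)"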
00:17:14.293 request: 00:17:14.293 { 00:17:14.293 "name": "TLSTEST", 00:17:14.293 "trtype": "tcp", 00:17:14.293 "traddr": "10.0.0.2", 00:17:14.293 "adrfam": "ipv4", 00:17:14.293 "trsvcid": "4420", 00:17:14.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:14.293 "prchk_reftag": false, 00:17:14.293 "prchk_guard": false, 00:17:14.293 "hdgst": false, 00:17:14.293 "ddgst": false, 00:17:14.293 "method": "bdev_nvme_attach_controller", 00:17:14.293 "req_id": 1 00:17:14.293 } 00:17:14.293 Got JSON-RPC error response 00:17:14.293 response: 00:17:14.293 { 00:17:14.293 "code": -5, 00:17:14.293 "message": "Input/output error" 00:17:14.293 } 00:17:14.293 10:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1221907 00:17:14.293 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1221907 ']' 00:17:14.293 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1221907 00:17:14.293 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:14.293 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:14.293 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1221907 00:17:14.551 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:14.551 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:14.551 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1221907' 00:17:14.551 killing process with pid 1221907 00:17:14.551 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1221907 00:17:14.551 Received shutdown signal, test time was about 10.000000 seconds 00:17:14.551 00:17:14.551 Latency(us) 00:17:14.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.551 =================================================================================================================== 00:17:14.551 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:14.551 10:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1221907 00:17:14.551 10:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:14.551 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:14.551 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:14.551 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:14.551 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:14.551 10:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1218397 00:17:14.551 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1218397 ']' 00:17:14.551 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1218397 00:17:14.551 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:14.808 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:14.808 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1218397 00:17:14.808 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:14.808 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:14.808 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1218397' 00:17:14.808 
killing process with pid 1218397 00:17:14.808 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1218397 00:17:14.808 [2024-07-15 10:34:03.127626] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:14.808 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1218397 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.kuFUBOcwav 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.kuFUBOcwav 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1222054 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1222054 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1222054 ']' 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.066 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:15.066 [2024-07-15 10:34:03.500733] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
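The key_long generated above (NVMeTLSkey-1:02:...) is an NVMe TLS PSK in interchange format: a prefix, a digest identifier, and a base64 blob of the key bytes plus an appended CRC-32. A minimal sketch of producing such a string follows; the little-endian byte order of the appended CRC and the reading of digest id 02 as SHA-384 are assumptions, since the actual format_interchange_psk helper lives in nvmf/common.sh and only its invocation appears in the log.

# Sketch of building an interchange-format PSK like key_long above.
# Assumptions: CRC-32 of the key bytes is appended little-endian before base64;
# digest id 02 selects SHA-384 for the TLS PSK hash.
python3 - << 'EOF'
import base64, zlib

key = b"00112233445566778899aabbccddeeff0011223344556677"
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed byte order for the appended CRC-32
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
EOF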
00:17:15.066 [2024-07-15 10:34:03.500831] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.066 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.066 [2024-07-15 10:34:03.559957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.323 [2024-07-15 10:34:03.663806] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.323 [2024-07-15 10:34:03.663861] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.323 [2024-07-15 10:34:03.663880] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.323 [2024-07-15 10:34:03.663891] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.323 [2024-07-15 10:34:03.663901] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.323 [2024-07-15 10:34:03.663930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.323 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.323 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:15.323 10:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:15.323 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:15.323 10:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:15.323 10:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.323 10:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.kuFUBOcwav 00:17:15.323 10:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kuFUBOcwav 00:17:15.323 10:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:15.580 [2024-07-15 10:34:04.008513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.580 10:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:15.837 10:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:16.095 [2024-07-15 10:34:04.501828] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:16.095 [2024-07-15 10:34:04.502036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.095 10:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:16.353 malloc0 00:17:16.353 10:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:16.610 10:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.kuFUBOcwav 00:17:16.868 [2024-07-15 10:34:05.238998] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kuFUBOcwav 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kuFUBOcwav' 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1222333 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1222333 /var/tmp/bdevperf.sock 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1222333 ']' 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.869 10:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.869 [2024-07-15 10:34:05.300966] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
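Condensed, the target-side setup traced above (transport, subsystem, TLS listener, namespace, host with PSK) amounts to the sequence below; every argument is taken from the traced commands, and $rpc is only shorthand for the rpc.py path used throughout this run.

# Recap of the target-side RPC sequence from the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o                                                        # TCP transport init
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kuFUBOcwav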
00:17:16.869 [2024-07-15 10:34:05.301042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222333 ] 00:17:16.869 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.869 [2024-07-15 10:34:05.359268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.126 [2024-07-15 10:34:05.467774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.126 10:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.126 10:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:17.127 10:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kuFUBOcwav 00:17:17.384 [2024-07-15 10:34:05.803774] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:17.384 [2024-07-15 10:34:05.803922] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:17.384 TLSTESTn1 00:17:17.384 10:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:17.641 Running I/O for 10 seconds... 00:17:27.605 00:17:27.605 Latency(us) 00:17:27.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.605 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:27.605 Verification LBA range: start 0x0 length 0x2000 00:17:27.605 TLSTESTn1 : 10.02 3631.62 14.19 0.00 0.00 35182.96 6165.24 33787.45 00:17:27.605 =================================================================================================================== 00:17:27.605 Total : 3631.62 14.19 0.00 0.00 35182.96 6165.24 33787.45 00:17:27.605 0 00:17:27.605 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:27.605 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1222333 00:17:27.605 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1222333 ']' 00:17:27.605 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1222333 00:17:27.605 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:27.605 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:27.605 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1222333 00:17:27.605 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:27.605 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:27.605 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1222333' 00:17:27.605 killing process with pid 1222333 00:17:27.605 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1222333 00:17:27.605 Received shutdown signal, test time was about 10.000000 seconds 00:17:27.605 00:17:27.605 Latency(us) 00:17:27.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:17:27.605 =================================================================================================================== 00:17:27.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:27.605 [2024-07-15 10:34:16.085541] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:27.605 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1222333 00:17:27.863 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.kuFUBOcwav 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kuFUBOcwav 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kuFUBOcwav 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kuFUBOcwav 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kuFUBOcwav' 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1223539 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1223539 /var/tmp/bdevperf.sock 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1223539 ']' 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.864 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.864 [2024-07-15 10:34:16.400412] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:27.864 [2024-07-15 10:34:16.400490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223539 ] 00:17:28.122 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.122 [2024-07-15 10:34:16.457846] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.122 [2024-07-15 10:34:16.560404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.122 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.122 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:28.122 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kuFUBOcwav 00:17:28.379 [2024-07-15 10:34:16.892957] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.379 [2024-07-15 10:34:16.893033] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:28.379 [2024-07-15 10:34:16.893048] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.kuFUBOcwav 00:17:28.379 request: 00:17:28.379 { 00:17:28.379 "name": "TLSTEST", 00:17:28.379 "trtype": "tcp", 00:17:28.379 "traddr": "10.0.0.2", 00:17:28.379 "adrfam": "ipv4", 00:17:28.379 "trsvcid": "4420", 00:17:28.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.379 "prchk_reftag": false, 00:17:28.379 "prchk_guard": false, 00:17:28.379 "hdgst": false, 00:17:28.379 "ddgst": false, 00:17:28.379 "psk": "/tmp/tmp.kuFUBOcwav", 00:17:28.379 "method": "bdev_nvme_attach_controller", 00:17:28.379 "req_id": 1 00:17:28.379 } 00:17:28.379 Got JSON-RPC error response 00:17:28.379 response: 00:17:28.379 { 00:17:28.379 "code": -1, 00:17:28.379 "message": "Operation not permitted" 00:17:28.379 } 00:17:28.379 10:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1223539 00:17:28.379 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1223539 ']' 00:17:28.379 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1223539 00:17:28.379 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:28.379 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:28.379 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1223539 00:17:28.637 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:28.637 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:28.637 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1223539' 00:17:28.637 killing process with pid 1223539 00:17:28.637 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1223539 00:17:28.637 Received shutdown signal, test time was about 10.000000 seconds 00:17:28.637 00:17:28.637 Latency(us) 00:17:28.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.637 
=================================================================================================================== 00:17:28.637 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:28.637 10:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1223539 00:17:28.637 10:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:28.637 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:28.637 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:28.637 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:28.637 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:28.637 10:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1222054 00:17:28.637 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1222054 ']' 00:17:28.637 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1222054 00:17:28.637 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:28.637 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:28.637 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1222054 00:17:28.895 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:28.895 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:28.895 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1222054' 00:17:28.895 killing process with pid 1222054 00:17:28.895 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1222054 00:17:28.895 [2024-07-15 10:34:17.191467] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:28.895 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1222054 00:17:29.153 10:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:29.153 10:34:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:29.153 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:29.153 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:29.153 10:34:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1223684 00:17:29.153 10:34:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:29.153 10:34:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1223684 00:17:29.153 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1223684 ']' 00:17:29.153 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.153 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.153 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:29.153 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.153 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:29.153 [2024-07-15 10:34:17.508335] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:29.153 [2024-07-15 10:34:17.508433] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.153 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.153 [2024-07-15 10:34:17.569993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.153 [2024-07-15 10:34:17.668817] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.153 [2024-07-15 10:34:17.668891] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.153 [2024-07-15 10:34:17.668906] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.153 [2024-07-15 10:34:17.668918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.153 [2024-07-15 10:34:17.668945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:29.153 [2024-07-15 10:34:17.668972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.kuFUBOcwav 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.kuFUBOcwav 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.kuFUBOcwav 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kuFUBOcwav 00:17:29.412 10:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:29.669 [2024-07-15 10:34:18.025001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.669 10:34:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:29.927 
10:34:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:30.186 [2024-07-15 10:34:18.518309] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:30.186 [2024-07-15 10:34:18.518517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.186 10:34:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:30.475 malloc0 00:17:30.475 10:34:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kuFUBOcwav 00:17:30.756 [2024-07-15 10:34:19.250336] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:30.756 [2024-07-15 10:34:19.250372] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:30.756 [2024-07-15 10:34:19.250418] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:30.756 request: 00:17:30.756 { 00:17:30.756 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.756 "host": "nqn.2016-06.io.spdk:host1", 00:17:30.756 "psk": "/tmp/tmp.kuFUBOcwav", 00:17:30.756 "method": "nvmf_subsystem_add_host", 00:17:30.756 "req_id": 1 00:17:30.756 } 00:17:30.756 Got JSON-RPC error response 00:17:30.756 response: 00:17:30.756 { 00:17:30.756 "code": -32603, 00:17:30.756 "message": "Internal error" 00:17:30.756 } 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1223684 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1223684 ']' 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1223684 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1223684 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1223684' 00:17:30.756 killing process with pid 1223684 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1223684 00:17:30.756 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1223684 00:17:31.015 10:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.kuFUBOcwav 00:17:31.273 10:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:31.273 
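The -32603 Internal error above is the target refusing a PSK file with permissive mode bits (tcp.c: "Incorrect permissions for PSK file"), which the test provoked with the earlier chmod 0666. A quick manual check before retrying nvmf_subsystem_add_host, as a sketch:

# 666 on the PSK file triggers the add_host failure seen above; owner-only access clears it.
stat -c '%a %n' /tmp/tmp.kuFUBOcwav     # expect 600; 666 reproduces the -32603 error
chmod 0600 /tmp/tmp.kuFUBOcwav          # restores owner-only access, as the script does above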
10:34:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:31.273 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:31.273 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.273 10:34:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1223979 00:17:31.273 10:34:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:31.273 10:34:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1223979 00:17:31.273 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1223979 ']' 00:17:31.273 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.273 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.273 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.273 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.273 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.273 [2024-07-15 10:34:19.618439] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:31.273 [2024-07-15 10:34:19.618535] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.273 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.273 [2024-07-15 10:34:19.678867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.273 [2024-07-15 10:34:19.777720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.273 [2024-07-15 10:34:19.777776] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.273 [2024-07-15 10:34:19.777814] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.273 [2024-07-15 10:34:19.777827] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.273 [2024-07-15 10:34:19.777836] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
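If this target instance needed debugging, the startup notice above already names the two options; a sketch of both, where the output file name is illustrative only.

# Snapshot the nvmf tracepoints of instance id 0, or keep the raw shm file for offline analysis,
# exactly as suggested by the app_setup_trace notices above.
spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt   # live snapshot (output path is an assumption)
cp /dev/shm/nvmf_trace.0 /tmp/                  # raw trace file named in the notice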
00:17:31.273 [2024-07-15 10:34:19.777862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.532 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.532 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:31.532 10:34:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.532 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:31.532 10:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.532 10:34:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.532 10:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.kuFUBOcwav 00:17:31.532 10:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kuFUBOcwav 00:17:31.532 10:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:31.789 [2024-07-15 10:34:20.149454] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.789 10:34:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:32.046 10:34:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:32.303 [2024-07-15 10:34:20.642733] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:32.303 [2024-07-15 10:34:20.642965] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.303 10:34:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:32.560 malloc0 00:17:32.560 10:34:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:32.817 10:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kuFUBOcwav 00:17:33.075 [2024-07-15 10:34:21.375993] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:33.075 10:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1224256 00:17:33.075 10:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:33.075 10:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:33.075 10:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1224256 /var/tmp/bdevperf.sock 00:17:33.075 10:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1224256 ']' 00:17:33.075 10:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.075 10:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.075 10:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:33.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.075 10:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.076 10:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.076 [2024-07-15 10:34:21.429583] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:33.076 [2024-07-15 10:34:21.429664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224256 ] 00:17:33.076 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.076 [2024-07-15 10:34:21.485305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.076 [2024-07-15 10:34:21.592657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.334 10:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:33.334 10:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:33.334 10:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kuFUBOcwav 00:17:33.592 [2024-07-15 10:34:21.926213] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:33.592 [2024-07-15 10:34:21.926318] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:33.592 TLSTESTn1 00:17:33.592 10:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:33.850 10:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:33.850 "subsystems": [ 00:17:33.850 { 00:17:33.850 "subsystem": "keyring", 00:17:33.850 "config": [] 00:17:33.850 }, 00:17:33.850 { 00:17:33.850 "subsystem": "iobuf", 00:17:33.850 "config": [ 00:17:33.850 { 00:17:33.850 "method": "iobuf_set_options", 00:17:33.850 "params": { 00:17:33.850 "small_pool_count": 8192, 00:17:33.850 "large_pool_count": 1024, 00:17:33.850 "small_bufsize": 8192, 00:17:33.850 "large_bufsize": 135168 00:17:33.850 } 00:17:33.850 } 00:17:33.850 ] 00:17:33.850 }, 00:17:33.850 { 00:17:33.850 "subsystem": "sock", 00:17:33.850 "config": [ 00:17:33.850 { 00:17:33.850 "method": "sock_set_default_impl", 00:17:33.850 "params": { 00:17:33.850 "impl_name": "posix" 00:17:33.850 } 00:17:33.850 }, 00:17:33.850 { 00:17:33.850 "method": "sock_impl_set_options", 00:17:33.850 "params": { 00:17:33.850 "impl_name": "ssl", 00:17:33.850 "recv_buf_size": 4096, 00:17:33.850 "send_buf_size": 4096, 00:17:33.850 "enable_recv_pipe": true, 00:17:33.850 "enable_quickack": false, 00:17:33.850 "enable_placement_id": 0, 00:17:33.850 "enable_zerocopy_send_server": true, 00:17:33.850 "enable_zerocopy_send_client": false, 00:17:33.850 "zerocopy_threshold": 0, 00:17:33.850 "tls_version": 0, 00:17:33.850 "enable_ktls": false 00:17:33.850 } 00:17:33.850 }, 00:17:33.850 { 00:17:33.850 "method": "sock_impl_set_options", 00:17:33.850 "params": { 00:17:33.850 "impl_name": "posix", 00:17:33.850 "recv_buf_size": 2097152, 00:17:33.850 
"send_buf_size": 2097152, 00:17:33.850 "enable_recv_pipe": true, 00:17:33.850 "enable_quickack": false, 00:17:33.850 "enable_placement_id": 0, 00:17:33.850 "enable_zerocopy_send_server": true, 00:17:33.850 "enable_zerocopy_send_client": false, 00:17:33.850 "zerocopy_threshold": 0, 00:17:33.850 "tls_version": 0, 00:17:33.850 "enable_ktls": false 00:17:33.850 } 00:17:33.850 } 00:17:33.850 ] 00:17:33.850 }, 00:17:33.850 { 00:17:33.850 "subsystem": "vmd", 00:17:33.850 "config": [] 00:17:33.850 }, 00:17:33.850 { 00:17:33.850 "subsystem": "accel", 00:17:33.850 "config": [ 00:17:33.850 { 00:17:33.850 "method": "accel_set_options", 00:17:33.850 "params": { 00:17:33.850 "small_cache_size": 128, 00:17:33.850 "large_cache_size": 16, 00:17:33.850 "task_count": 2048, 00:17:33.850 "sequence_count": 2048, 00:17:33.850 "buf_count": 2048 00:17:33.850 } 00:17:33.850 } 00:17:33.850 ] 00:17:33.850 }, 00:17:33.850 { 00:17:33.850 "subsystem": "bdev", 00:17:33.850 "config": [ 00:17:33.850 { 00:17:33.850 "method": "bdev_set_options", 00:17:33.850 "params": { 00:17:33.850 "bdev_io_pool_size": 65535, 00:17:33.850 "bdev_io_cache_size": 256, 00:17:33.850 "bdev_auto_examine": true, 00:17:33.850 "iobuf_small_cache_size": 128, 00:17:33.850 "iobuf_large_cache_size": 16 00:17:33.850 } 00:17:33.850 }, 00:17:33.850 { 00:17:33.850 "method": "bdev_raid_set_options", 00:17:33.850 "params": { 00:17:33.850 "process_window_size_kb": 1024 00:17:33.850 } 00:17:33.850 }, 00:17:33.850 { 00:17:33.850 "method": "bdev_iscsi_set_options", 00:17:33.850 "params": { 00:17:33.850 "timeout_sec": 30 00:17:33.850 } 00:17:33.850 }, 00:17:33.850 { 00:17:33.850 "method": "bdev_nvme_set_options", 00:17:33.850 "params": { 00:17:33.850 "action_on_timeout": "none", 00:17:33.850 "timeout_us": 0, 00:17:33.850 "timeout_admin_us": 0, 00:17:33.850 "keep_alive_timeout_ms": 10000, 00:17:33.850 "arbitration_burst": 0, 00:17:33.850 "low_priority_weight": 0, 00:17:33.850 "medium_priority_weight": 0, 00:17:33.850 "high_priority_weight": 0, 00:17:33.850 "nvme_adminq_poll_period_us": 10000, 00:17:33.850 "nvme_ioq_poll_period_us": 0, 00:17:33.850 "io_queue_requests": 0, 00:17:33.850 "delay_cmd_submit": true, 00:17:33.850 "transport_retry_count": 4, 00:17:33.850 "bdev_retry_count": 3, 00:17:33.850 "transport_ack_timeout": 0, 00:17:33.851 "ctrlr_loss_timeout_sec": 0, 00:17:33.851 "reconnect_delay_sec": 0, 00:17:33.851 "fast_io_fail_timeout_sec": 0, 00:17:33.851 "disable_auto_failback": false, 00:17:33.851 "generate_uuids": false, 00:17:33.851 "transport_tos": 0, 00:17:33.851 "nvme_error_stat": false, 00:17:33.851 "rdma_srq_size": 0, 00:17:33.851 "io_path_stat": false, 00:17:33.851 "allow_accel_sequence": false, 00:17:33.851 "rdma_max_cq_size": 0, 00:17:33.851 "rdma_cm_event_timeout_ms": 0, 00:17:33.851 "dhchap_digests": [ 00:17:33.851 "sha256", 00:17:33.851 "sha384", 00:17:33.851 "sha512" 00:17:33.851 ], 00:17:33.851 "dhchap_dhgroups": [ 00:17:33.851 "null", 00:17:33.851 "ffdhe2048", 00:17:33.851 "ffdhe3072", 00:17:33.851 "ffdhe4096", 00:17:33.851 "ffdhe6144", 00:17:33.851 "ffdhe8192" 00:17:33.851 ] 00:17:33.851 } 00:17:33.851 }, 00:17:33.851 { 00:17:33.851 "method": "bdev_nvme_set_hotplug", 00:17:33.851 "params": { 00:17:33.851 "period_us": 100000, 00:17:33.851 "enable": false 00:17:33.851 } 00:17:33.851 }, 00:17:33.851 { 00:17:33.851 "method": "bdev_malloc_create", 00:17:33.851 "params": { 00:17:33.851 "name": "malloc0", 00:17:33.851 "num_blocks": 8192, 00:17:33.851 "block_size": 4096, 00:17:33.851 "physical_block_size": 4096, 00:17:33.851 "uuid": 
"b22f1527-26b7-4b40-8ef4-9d7b77e9b5b3", 00:17:33.851 "optimal_io_boundary": 0 00:17:33.851 } 00:17:33.851 }, 00:17:33.851 { 00:17:33.851 "method": "bdev_wait_for_examine" 00:17:33.851 } 00:17:33.851 ] 00:17:33.851 }, 00:17:33.851 { 00:17:33.851 "subsystem": "nbd", 00:17:33.851 "config": [] 00:17:33.851 }, 00:17:33.851 { 00:17:33.851 "subsystem": "scheduler", 00:17:33.851 "config": [ 00:17:33.851 { 00:17:33.851 "method": "framework_set_scheduler", 00:17:33.851 "params": { 00:17:33.851 "name": "static" 00:17:33.851 } 00:17:33.851 } 00:17:33.851 ] 00:17:33.851 }, 00:17:33.851 { 00:17:33.851 "subsystem": "nvmf", 00:17:33.851 "config": [ 00:17:33.851 { 00:17:33.851 "method": "nvmf_set_config", 00:17:33.851 "params": { 00:17:33.851 "discovery_filter": "match_any", 00:17:33.851 "admin_cmd_passthru": { 00:17:33.851 "identify_ctrlr": false 00:17:33.851 } 00:17:33.851 } 00:17:33.851 }, 00:17:33.851 { 00:17:33.851 "method": "nvmf_set_max_subsystems", 00:17:33.851 "params": { 00:17:33.851 "max_subsystems": 1024 00:17:33.851 } 00:17:33.851 }, 00:17:33.851 { 00:17:33.851 "method": "nvmf_set_crdt", 00:17:33.851 "params": { 00:17:33.851 "crdt1": 0, 00:17:33.851 "crdt2": 0, 00:17:33.851 "crdt3": 0 00:17:33.851 } 00:17:33.851 }, 00:17:33.851 { 00:17:33.851 "method": "nvmf_create_transport", 00:17:33.851 "params": { 00:17:33.851 "trtype": "TCP", 00:17:33.851 "max_queue_depth": 128, 00:17:33.851 "max_io_qpairs_per_ctrlr": 127, 00:17:33.851 "in_capsule_data_size": 4096, 00:17:33.851 "max_io_size": 131072, 00:17:33.851 "io_unit_size": 131072, 00:17:33.851 "max_aq_depth": 128, 00:17:33.851 "num_shared_buffers": 511, 00:17:33.851 "buf_cache_size": 4294967295, 00:17:33.851 "dif_insert_or_strip": false, 00:17:33.851 "zcopy": false, 00:17:33.851 "c2h_success": false, 00:17:33.851 "sock_priority": 0, 00:17:33.851 "abort_timeout_sec": 1, 00:17:33.851 "ack_timeout": 0, 00:17:33.851 "data_wr_pool_size": 0 00:17:33.851 } 00:17:33.851 }, 00:17:33.851 { 00:17:33.851 "method": "nvmf_create_subsystem", 00:17:33.851 "params": { 00:17:33.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.851 "allow_any_host": false, 00:17:33.851 "serial_number": "SPDK00000000000001", 00:17:33.851 "model_number": "SPDK bdev Controller", 00:17:33.851 "max_namespaces": 10, 00:17:33.851 "min_cntlid": 1, 00:17:33.851 "max_cntlid": 65519, 00:17:33.851 "ana_reporting": false 00:17:33.851 } 00:17:33.851 }, 00:17:33.851 { 00:17:33.851 "method": "nvmf_subsystem_add_host", 00:17:33.851 "params": { 00:17:33.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.851 "host": "nqn.2016-06.io.spdk:host1", 00:17:33.851 "psk": "/tmp/tmp.kuFUBOcwav" 00:17:33.851 } 00:17:33.851 }, 00:17:33.851 { 00:17:33.851 "method": "nvmf_subsystem_add_ns", 00:17:33.851 "params": { 00:17:33.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.851 "namespace": { 00:17:33.851 "nsid": 1, 00:17:33.851 "bdev_name": "malloc0", 00:17:33.851 "nguid": "B22F152726B74B408EF49D7B77E9B5B3", 00:17:33.851 "uuid": "b22f1527-26b7-4b40-8ef4-9d7b77e9b5b3", 00:17:33.851 "no_auto_visible": false 00:17:33.851 } 00:17:33.851 } 00:17:33.851 }, 00:17:33.851 { 00:17:33.851 "method": "nvmf_subsystem_add_listener", 00:17:33.851 "params": { 00:17:33.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.851 "listen_address": { 00:17:33.851 "trtype": "TCP", 00:17:33.851 "adrfam": "IPv4", 00:17:33.851 "traddr": "10.0.0.2", 00:17:33.851 "trsvcid": "4420" 00:17:33.851 }, 00:17:33.851 "secure_channel": true 00:17:33.851 } 00:17:33.851 } 00:17:33.851 ] 00:17:33.851 } 00:17:33.851 ] 00:17:33.851 }' 00:17:33.851 10:34:22 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:34.109 10:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:34.109 "subsystems": [ 00:17:34.109 { 00:17:34.109 "subsystem": "keyring", 00:17:34.109 "config": [] 00:17:34.109 }, 00:17:34.109 { 00:17:34.109 "subsystem": "iobuf", 00:17:34.109 "config": [ 00:17:34.109 { 00:17:34.109 "method": "iobuf_set_options", 00:17:34.109 "params": { 00:17:34.109 "small_pool_count": 8192, 00:17:34.109 "large_pool_count": 1024, 00:17:34.109 "small_bufsize": 8192, 00:17:34.109 "large_bufsize": 135168 00:17:34.109 } 00:17:34.109 } 00:17:34.109 ] 00:17:34.109 }, 00:17:34.109 { 00:17:34.109 "subsystem": "sock", 00:17:34.109 "config": [ 00:17:34.109 { 00:17:34.109 "method": "sock_set_default_impl", 00:17:34.109 "params": { 00:17:34.109 "impl_name": "posix" 00:17:34.109 } 00:17:34.109 }, 00:17:34.109 { 00:17:34.109 "method": "sock_impl_set_options", 00:17:34.109 "params": { 00:17:34.109 "impl_name": "ssl", 00:17:34.109 "recv_buf_size": 4096, 00:17:34.109 "send_buf_size": 4096, 00:17:34.109 "enable_recv_pipe": true, 00:17:34.109 "enable_quickack": false, 00:17:34.109 "enable_placement_id": 0, 00:17:34.109 "enable_zerocopy_send_server": true, 00:17:34.109 "enable_zerocopy_send_client": false, 00:17:34.109 "zerocopy_threshold": 0, 00:17:34.110 "tls_version": 0, 00:17:34.110 "enable_ktls": false 00:17:34.110 } 00:17:34.110 }, 00:17:34.110 { 00:17:34.110 "method": "sock_impl_set_options", 00:17:34.110 "params": { 00:17:34.110 "impl_name": "posix", 00:17:34.110 "recv_buf_size": 2097152, 00:17:34.110 "send_buf_size": 2097152, 00:17:34.110 "enable_recv_pipe": true, 00:17:34.110 "enable_quickack": false, 00:17:34.110 "enable_placement_id": 0, 00:17:34.110 "enable_zerocopy_send_server": true, 00:17:34.110 "enable_zerocopy_send_client": false, 00:17:34.110 "zerocopy_threshold": 0, 00:17:34.110 "tls_version": 0, 00:17:34.110 "enable_ktls": false 00:17:34.110 } 00:17:34.110 } 00:17:34.110 ] 00:17:34.110 }, 00:17:34.110 { 00:17:34.110 "subsystem": "vmd", 00:17:34.110 "config": [] 00:17:34.110 }, 00:17:34.110 { 00:17:34.110 "subsystem": "accel", 00:17:34.110 "config": [ 00:17:34.110 { 00:17:34.110 "method": "accel_set_options", 00:17:34.110 "params": { 00:17:34.110 "small_cache_size": 128, 00:17:34.110 "large_cache_size": 16, 00:17:34.110 "task_count": 2048, 00:17:34.110 "sequence_count": 2048, 00:17:34.110 "buf_count": 2048 00:17:34.110 } 00:17:34.110 } 00:17:34.110 ] 00:17:34.110 }, 00:17:34.110 { 00:17:34.110 "subsystem": "bdev", 00:17:34.110 "config": [ 00:17:34.110 { 00:17:34.110 "method": "bdev_set_options", 00:17:34.110 "params": { 00:17:34.110 "bdev_io_pool_size": 65535, 00:17:34.110 "bdev_io_cache_size": 256, 00:17:34.110 "bdev_auto_examine": true, 00:17:34.110 "iobuf_small_cache_size": 128, 00:17:34.110 "iobuf_large_cache_size": 16 00:17:34.110 } 00:17:34.110 }, 00:17:34.110 { 00:17:34.110 "method": "bdev_raid_set_options", 00:17:34.110 "params": { 00:17:34.110 "process_window_size_kb": 1024 00:17:34.110 } 00:17:34.110 }, 00:17:34.110 { 00:17:34.110 "method": "bdev_iscsi_set_options", 00:17:34.110 "params": { 00:17:34.110 "timeout_sec": 30 00:17:34.110 } 00:17:34.110 }, 00:17:34.110 { 00:17:34.110 "method": "bdev_nvme_set_options", 00:17:34.110 "params": { 00:17:34.110 "action_on_timeout": "none", 00:17:34.110 "timeout_us": 0, 00:17:34.110 "timeout_admin_us": 0, 00:17:34.110 "keep_alive_timeout_ms": 10000, 00:17:34.110 "arbitration_burst": 0, 
00:17:34.110 "low_priority_weight": 0, 00:17:34.110 "medium_priority_weight": 0, 00:17:34.110 "high_priority_weight": 0, 00:17:34.110 "nvme_adminq_poll_period_us": 10000, 00:17:34.110 "nvme_ioq_poll_period_us": 0, 00:17:34.110 "io_queue_requests": 512, 00:17:34.110 "delay_cmd_submit": true, 00:17:34.110 "transport_retry_count": 4, 00:17:34.110 "bdev_retry_count": 3, 00:17:34.110 "transport_ack_timeout": 0, 00:17:34.110 "ctrlr_loss_timeout_sec": 0, 00:17:34.110 "reconnect_delay_sec": 0, 00:17:34.110 "fast_io_fail_timeout_sec": 0, 00:17:34.110 "disable_auto_failback": false, 00:17:34.110 "generate_uuids": false, 00:17:34.110 "transport_tos": 0, 00:17:34.110 "nvme_error_stat": false, 00:17:34.110 "rdma_srq_size": 0, 00:17:34.110 "io_path_stat": false, 00:17:34.110 "allow_accel_sequence": false, 00:17:34.110 "rdma_max_cq_size": 0, 00:17:34.110 "rdma_cm_event_timeout_ms": 0, 00:17:34.110 "dhchap_digests": [ 00:17:34.110 "sha256", 00:17:34.110 "sha384", 00:17:34.110 "sha512" 00:17:34.110 ], 00:17:34.110 "dhchap_dhgroups": [ 00:17:34.110 "null", 00:17:34.110 "ffdhe2048", 00:17:34.110 "ffdhe3072", 00:17:34.110 "ffdhe4096", 00:17:34.110 "ffdhe6144", 00:17:34.110 "ffdhe8192" 00:17:34.110 ] 00:17:34.110 } 00:17:34.110 }, 00:17:34.110 { 00:17:34.110 "method": "bdev_nvme_attach_controller", 00:17:34.110 "params": { 00:17:34.110 "name": "TLSTEST", 00:17:34.110 "trtype": "TCP", 00:17:34.110 "adrfam": "IPv4", 00:17:34.110 "traddr": "10.0.0.2", 00:17:34.110 "trsvcid": "4420", 00:17:34.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.110 "prchk_reftag": false, 00:17:34.110 "prchk_guard": false, 00:17:34.110 "ctrlr_loss_timeout_sec": 0, 00:17:34.110 "reconnect_delay_sec": 0, 00:17:34.110 "fast_io_fail_timeout_sec": 0, 00:17:34.110 "psk": "/tmp/tmp.kuFUBOcwav", 00:17:34.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:34.110 "hdgst": false, 00:17:34.110 "ddgst": false 00:17:34.110 } 00:17:34.110 }, 00:17:34.110 { 00:17:34.110 "method": "bdev_nvme_set_hotplug", 00:17:34.110 "params": { 00:17:34.110 "period_us": 100000, 00:17:34.110 "enable": false 00:17:34.110 } 00:17:34.110 }, 00:17:34.110 { 00:17:34.110 "method": "bdev_wait_for_examine" 00:17:34.110 } 00:17:34.110 ] 00:17:34.110 }, 00:17:34.110 { 00:17:34.110 "subsystem": "nbd", 00:17:34.110 "config": [] 00:17:34.110 } 00:17:34.110 ] 00:17:34.110 }' 00:17:34.110 10:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1224256 00:17:34.110 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1224256 ']' 00:17:34.110 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1224256 00:17:34.110 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:34.110 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:34.110 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1224256 00:17:34.369 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:34.369 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:34.369 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1224256' 00:17:34.369 killing process with pid 1224256 00:17:34.369 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1224256 00:17:34.369 Received shutdown signal, test time was about 10.000000 seconds 00:17:34.369 00:17:34.369 Latency(us) 00:17:34.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:17:34.369 =================================================================================================================== 00:17:34.369 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:34.369 [2024-07-15 10:34:22.675262] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:34.369 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1224256 00:17:34.627 10:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1223979 00:17:34.627 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1223979 ']' 00:17:34.627 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1223979 00:17:34.627 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:34.627 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:34.627 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1223979 00:17:34.627 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:34.627 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:34.627 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1223979' 00:17:34.627 killing process with pid 1223979 00:17:34.627 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1223979 00:17:34.627 [2024-07-15 10:34:22.961902] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:34.627 10:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1223979 00:17:34.884 10:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:34.884 10:34:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:34.884 10:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:34.884 10:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:17:34.884 "subsystems": [ 00:17:34.884 { 00:17:34.884 "subsystem": "keyring", 00:17:34.884 "config": [] 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "subsystem": "iobuf", 00:17:34.884 "config": [ 00:17:34.884 { 00:17:34.884 "method": "iobuf_set_options", 00:17:34.884 "params": { 00:17:34.884 "small_pool_count": 8192, 00:17:34.884 "large_pool_count": 1024, 00:17:34.884 "small_bufsize": 8192, 00:17:34.884 "large_bufsize": 135168 00:17:34.884 } 00:17:34.884 } 00:17:34.884 ] 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "subsystem": "sock", 00:17:34.884 "config": [ 00:17:34.884 { 00:17:34.884 "method": "sock_set_default_impl", 00:17:34.884 "params": { 00:17:34.884 "impl_name": "posix" 00:17:34.884 } 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "method": "sock_impl_set_options", 00:17:34.884 "params": { 00:17:34.884 "impl_name": "ssl", 00:17:34.884 "recv_buf_size": 4096, 00:17:34.884 "send_buf_size": 4096, 00:17:34.884 "enable_recv_pipe": true, 00:17:34.884 "enable_quickack": false, 00:17:34.884 "enable_placement_id": 0, 00:17:34.884 "enable_zerocopy_send_server": true, 00:17:34.884 "enable_zerocopy_send_client": false, 00:17:34.884 "zerocopy_threshold": 0, 00:17:34.884 "tls_version": 0, 00:17:34.884 "enable_ktls": false 00:17:34.884 } 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "method": "sock_impl_set_options", 00:17:34.884 "params": { 00:17:34.884 "impl_name": "posix", 00:17:34.884 
"recv_buf_size": 2097152, 00:17:34.884 "send_buf_size": 2097152, 00:17:34.884 "enable_recv_pipe": true, 00:17:34.884 "enable_quickack": false, 00:17:34.884 "enable_placement_id": 0, 00:17:34.884 "enable_zerocopy_send_server": true, 00:17:34.884 "enable_zerocopy_send_client": false, 00:17:34.884 "zerocopy_threshold": 0, 00:17:34.884 "tls_version": 0, 00:17:34.884 "enable_ktls": false 00:17:34.884 } 00:17:34.884 } 00:17:34.884 ] 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "subsystem": "vmd", 00:17:34.884 "config": [] 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "subsystem": "accel", 00:17:34.884 "config": [ 00:17:34.884 { 00:17:34.884 "method": "accel_set_options", 00:17:34.884 "params": { 00:17:34.884 "small_cache_size": 128, 00:17:34.884 "large_cache_size": 16, 00:17:34.884 "task_count": 2048, 00:17:34.884 "sequence_count": 2048, 00:17:34.884 "buf_count": 2048 00:17:34.884 } 00:17:34.884 } 00:17:34.884 ] 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "subsystem": "bdev", 00:17:34.884 "config": [ 00:17:34.884 { 00:17:34.884 "method": "bdev_set_options", 00:17:34.884 "params": { 00:17:34.884 "bdev_io_pool_size": 65535, 00:17:34.884 "bdev_io_cache_size": 256, 00:17:34.884 "bdev_auto_examine": true, 00:17:34.884 "iobuf_small_cache_size": 128, 00:17:34.884 "iobuf_large_cache_size": 16 00:17:34.884 } 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "method": "bdev_raid_set_options", 00:17:34.884 "params": { 00:17:34.884 "process_window_size_kb": 1024 00:17:34.884 } 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "method": "bdev_iscsi_set_options", 00:17:34.884 "params": { 00:17:34.884 "timeout_sec": 30 00:17:34.884 } 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "method": "bdev_nvme_set_options", 00:17:34.884 "params": { 00:17:34.884 "action_on_timeout": "none", 00:17:34.884 "timeout_us": 0, 00:17:34.884 "timeout_admin_us": 0, 00:17:34.884 "keep_alive_timeout_ms": 10000, 00:17:34.884 "arbitration_burst": 0, 00:17:34.884 "low_priority_weight": 0, 00:17:34.884 "medium_priority_weight": 0, 00:17:34.884 "high_priority_weight": 0, 00:17:34.884 "nvme_adminq_poll_period_us": 10000, 00:17:34.884 "nvme_ioq_poll_period_us": 0, 00:17:34.884 "io_queue_requests": 0, 00:17:34.884 "delay_cmd_submit": true, 00:17:34.884 "transport_retry_count": 4, 00:17:34.884 "bdev_retry_count": 3, 00:17:34.884 "transport_ack_timeout": 0, 00:17:34.884 "ctrlr_loss_timeout_sec": 0, 00:17:34.884 "reconnect_delay_sec": 0, 00:17:34.884 "fast_io_fail_timeout_sec": 0, 00:17:34.884 "disable_auto_failback": false, 00:17:34.884 "generate_uuids": false, 00:17:34.884 "transport_tos": 0, 00:17:34.884 "nvme_error_stat": false, 00:17:34.884 "rdma_srq_size": 0, 00:17:34.884 "io_path_stat": false, 00:17:34.884 "allow_accel_sequence": false, 00:17:34.884 "rdma_max_cq_size": 0, 00:17:34.884 "rdma_cm_event_timeout_ms": 0, 00:17:34.884 "dhchap_digests": [ 00:17:34.884 "sha256", 00:17:34.884 "sha384", 00:17:34.884 "sha512" 00:17:34.884 ], 00:17:34.884 "dhchap_dhgroups": [ 00:17:34.884 "null", 00:17:34.884 "ffdhe2048", 00:17:34.884 "ffdhe3072", 00:17:34.884 "ffdhe4096", 00:17:34.884 "ffdhe6144", 00:17:34.884 "ffdhe8192" 00:17:34.884 ] 00:17:34.884 } 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "method": "bdev_nvme_set_hotplug", 00:17:34.884 "params": { 00:17:34.884 "period_us": 100000, 00:17:34.884 "enable": false 00:17:34.884 } 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "method": "bdev_malloc_create", 00:17:34.884 "params": { 00:17:34.884 "name": "malloc0", 00:17:34.884 "num_blocks": 8192, 00:17:34.884 "block_size": 4096, 00:17:34.884 "physical_block_size": 4096, 
00:17:34.884 "uuid": "b22f1527-26b7-4b40-8ef4-9d7b77e9b5b3", 00:17:34.884 "optimal_io_boundary": 0 00:17:34.884 } 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "method": "bdev_wait_for_examine" 00:17:34.884 } 00:17:34.884 ] 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "subsystem": "nbd", 00:17:34.884 "config": [] 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "subsystem": "scheduler", 00:17:34.884 "config": [ 00:17:34.884 { 00:17:34.884 "method": "framework_set_scheduler", 00:17:34.884 "params": { 00:17:34.884 "name": "static" 00:17:34.884 } 00:17:34.884 } 00:17:34.884 ] 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "subsystem": "nvmf", 00:17:34.884 "config": [ 00:17:34.884 { 00:17:34.884 "method": "nvmf_set_config", 00:17:34.884 "params": { 00:17:34.884 "discovery_filter": "match_any", 00:17:34.884 "admin_cmd_passthru": { 00:17:34.884 "identify_ctrlr": false 00:17:34.884 } 00:17:34.884 } 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "method": "nvmf_set_max_subsystems", 00:17:34.884 "params": { 00:17:34.884 "max_subsystems": 1024 00:17:34.884 } 00:17:34.884 }, 00:17:34.884 { 00:17:34.884 "method": "nvmf_set_crdt", 00:17:34.884 "params": { 00:17:34.885 "crdt1": 0, 00:17:34.885 "crdt2": 0, 00:17:34.885 "crdt3": 0 00:17:34.885 } 00:17:34.885 }, 00:17:34.885 { 00:17:34.885 "method": "nvmf_create_transport", 00:17:34.885 "params": { 00:17:34.885 "trtype": "TCP", 00:17:34.885 "max_queue_depth": 128, 00:17:34.885 "max_io_qpairs_per_ctrlr": 127, 00:17:34.885 "in_capsule_data_size": 4096, 00:17:34.885 "max_io_size": 131072, 00:17:34.885 "io_unit_size": 131072, 00:17:34.885 "max_aq_depth": 128, 00:17:34.885 "num_shared_buffers": 511, 00:17:34.885 "buf_cache_size": 4294967295, 00:17:34.885 "dif_insert_or_strip": false, 00:17:34.885 "zcopy": false, 00:17:34.885 "c2h_success": false, 00:17:34.885 "sock_priority": 0, 00:17:34.885 "abort_timeout_sec": 1, 00:17:34.885 "ack_timeout": 0, 00:17:34.885 "data_wr_pool_size": 0 00:17:34.885 } 00:17:34.885 }, 00:17:34.885 { 00:17:34.885 "method": "nvmf_create_subsystem", 00:17:34.885 "params": { 00:17:34.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.885 "allow_any_host": false, 00:17:34.885 "serial_number": "SPDK00000000000001", 00:17:34.885 "model_number": "SPDK bdev Controller", 00:17:34.885 "max_namespaces": 10, 00:17:34.885 "min_cntlid": 1, 00:17:34.885 "max_cntlid": 65519, 00:17:34.885 "ana_reporting": false 00:17:34.885 } 00:17:34.885 }, 00:17:34.885 { 00:17:34.885 "method": "nvmf_subsystem_add_host", 00:17:34.885 "params": { 00:17:34.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.885 "host": "nqn.2016-06.io.spdk:host1", 00:17:34.885 "psk": "/tmp/tmp.kuFUBOcwav" 00:17:34.885 } 00:17:34.885 }, 00:17:34.885 { 00:17:34.885 "method": "nvmf_subsystem_add_ns", 00:17:34.885 "params": { 00:17:34.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.885 "namespace": { 00:17:34.885 "nsid": 1, 00:17:34.885 "bdev_name": "malloc0", 00:17:34.885 "nguid": "B22F152726B74B408EF49D7B77E9B5B3", 00:17:34.885 "uuid": "b22f1527-26b7-4b40-8ef4-9d7b77e9b5b3", 00:17:34.885 "no_auto_visible": false 00:17:34.885 } 00:17:34.885 } 00:17:34.885 }, 00:17:34.885 { 00:17:34.885 "method": "nvmf_subsystem_add_listener", 00:17:34.885 "params": { 00:17:34.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.885 "listen_address": { 00:17:34.885 "trtype": "TCP", 00:17:34.885 "adrfam": "IPv4", 00:17:34.885 "traddr": "10.0.0.2", 00:17:34.885 "trsvcid": "4420" 00:17:34.885 }, 00:17:34.885 "secure_channel": true 00:17:34.885 } 00:17:34.885 } 00:17:34.885 ] 00:17:34.885 } 00:17:34.885 ] 00:17:34.885 }' 
00:17:34.885 10:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.885 10:34:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1224423 00:17:34.885 10:34:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:34.885 10:34:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1224423 00:17:34.885 10:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1224423 ']' 00:17:34.885 10:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.885 10:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.885 10:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.885 10:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.885 10:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.885 [2024-07-15 10:34:23.297313] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:34.885 [2024-07-15 10:34:23.297395] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.885 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.885 [2024-07-15 10:34:23.363053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.142 [2024-07-15 10:34:23.470400] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.142 [2024-07-15 10:34:23.470463] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.143 [2024-07-15 10:34:23.470477] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.143 [2024-07-15 10:34:23.470502] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.143 [2024-07-15 10:34:23.470512] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:35.143 [2024-07-15 10:34:23.470592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.421 [2024-07-15 10:34:23.706128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.421 [2024-07-15 10:34:23.722109] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:35.421 [2024-07-15 10:34:23.738168] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:35.421 [2024-07-15 10:34:23.759948] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.986 10:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.986 10:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:35.986 10:34:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:35.986 10:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:35.986 10:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.986 10:34:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.986 10:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1224571 00:17:35.986 10:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1224571 /var/tmp/bdevperf.sock 00:17:35.986 10:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1224571 ']' 00:17:35.986 10:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:35.986 10:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:35.986 10:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.986 10:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:35.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:35.986 10:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:35.986 "subsystems": [ 00:17:35.986 { 00:17:35.986 "subsystem": "keyring", 00:17:35.986 "config": [] 00:17:35.986 }, 00:17:35.986 { 00:17:35.986 "subsystem": "iobuf", 00:17:35.986 "config": [ 00:17:35.986 { 00:17:35.986 "method": "iobuf_set_options", 00:17:35.986 "params": { 00:17:35.986 "small_pool_count": 8192, 00:17:35.986 "large_pool_count": 1024, 00:17:35.986 "small_bufsize": 8192, 00:17:35.986 "large_bufsize": 135168 00:17:35.986 } 00:17:35.986 } 00:17:35.986 ] 00:17:35.986 }, 00:17:35.986 { 00:17:35.986 "subsystem": "sock", 00:17:35.986 "config": [ 00:17:35.986 { 00:17:35.986 "method": "sock_set_default_impl", 00:17:35.986 "params": { 00:17:35.986 "impl_name": "posix" 00:17:35.986 } 00:17:35.986 }, 00:17:35.986 { 00:17:35.986 "method": "sock_impl_set_options", 00:17:35.986 "params": { 00:17:35.986 "impl_name": "ssl", 00:17:35.986 "recv_buf_size": 4096, 00:17:35.986 "send_buf_size": 4096, 00:17:35.986 "enable_recv_pipe": true, 00:17:35.986 "enable_quickack": false, 00:17:35.986 "enable_placement_id": 0, 00:17:35.986 "enable_zerocopy_send_server": true, 00:17:35.986 "enable_zerocopy_send_client": false, 00:17:35.986 "zerocopy_threshold": 0, 00:17:35.986 "tls_version": 0, 00:17:35.986 "enable_ktls": false 00:17:35.986 } 00:17:35.986 }, 00:17:35.986 { 00:17:35.986 "method": "sock_impl_set_options", 00:17:35.986 "params": { 00:17:35.987 "impl_name": "posix", 00:17:35.987 "recv_buf_size": 2097152, 00:17:35.987 "send_buf_size": 2097152, 00:17:35.987 "enable_recv_pipe": true, 00:17:35.987 "enable_quickack": false, 00:17:35.987 "enable_placement_id": 0, 00:17:35.987 "enable_zerocopy_send_server": true, 00:17:35.987 "enable_zerocopy_send_client": false, 00:17:35.987 "zerocopy_threshold": 0, 00:17:35.987 "tls_version": 0, 00:17:35.987 "enable_ktls": false 00:17:35.987 } 00:17:35.987 } 00:17:35.987 ] 00:17:35.987 }, 00:17:35.987 { 00:17:35.987 "subsystem": "vmd", 00:17:35.987 "config": [] 00:17:35.987 }, 00:17:35.987 { 00:17:35.987 "subsystem": "accel", 00:17:35.987 "config": [ 00:17:35.987 { 00:17:35.987 "method": "accel_set_options", 00:17:35.987 "params": { 00:17:35.987 "small_cache_size": 128, 00:17:35.987 "large_cache_size": 16, 00:17:35.987 "task_count": 2048, 00:17:35.987 "sequence_count": 2048, 00:17:35.987 "buf_count": 2048 00:17:35.987 } 00:17:35.987 } 00:17:35.987 ] 00:17:35.987 }, 00:17:35.987 { 00:17:35.987 "subsystem": "bdev", 00:17:35.987 "config": [ 00:17:35.987 { 00:17:35.987 "method": "bdev_set_options", 00:17:35.987 "params": { 00:17:35.987 "bdev_io_pool_size": 65535, 00:17:35.987 "bdev_io_cache_size": 256, 00:17:35.987 "bdev_auto_examine": true, 00:17:35.987 "iobuf_small_cache_size": 128, 00:17:35.987 "iobuf_large_cache_size": 16 00:17:35.987 } 00:17:35.987 }, 00:17:35.987 { 00:17:35.987 "method": "bdev_raid_set_options", 00:17:35.987 "params": { 00:17:35.987 "process_window_size_kb": 1024 00:17:35.987 } 00:17:35.987 }, 00:17:35.987 { 00:17:35.987 "method": "bdev_iscsi_set_options", 00:17:35.987 "params": { 00:17:35.987 "timeout_sec": 30 00:17:35.987 } 00:17:35.987 }, 00:17:35.987 { 00:17:35.987 "method": "bdev_nvme_set_options", 00:17:35.987 "params": { 00:17:35.987 "action_on_timeout": "none", 00:17:35.987 "timeout_us": 0, 00:17:35.987 "timeout_admin_us": 0, 00:17:35.987 "keep_alive_timeout_ms": 10000, 00:17:35.987 "arbitration_burst": 0, 00:17:35.987 "low_priority_weight": 0, 00:17:35.987 "medium_priority_weight": 0, 00:17:35.987 "high_priority_weight": 0, 00:17:35.987 
"nvme_adminq_poll_period_us": 10000, 00:17:35.987 "nvme_ioq_poll_period_us": 0, 00:17:35.987 "io_queue_requests": 512, 00:17:35.987 "delay_cmd_submit": true, 00:17:35.987 "transport_retry_count": 4, 00:17:35.987 "bdev_retry_count": 3, 00:17:35.987 "transport_ack_timeout": 0, 00:17:35.987 "ctrlr_loss_timeout_sec": 0, 00:17:35.987 "reconnect_delay_sec": 0, 00:17:35.987 "fast_io_fail_timeout_sec": 0, 00:17:35.987 "disable_auto_failback": false, 00:17:35.987 "generate_uuids": false, 00:17:35.987 "transport_tos": 0, 00:17:35.987 "nvme_error_stat": false, 00:17:35.987 "rdma_srq_size": 0, 00:17:35.987 "io_path_stat": false, 00:17:35.987 "allow_accel_sequence": false, 00:17:35.987 "rdma_max_cq_size": 0, 00:17:35.987 "rdma_cm_event_timeout_ms": 0, 00:17:35.987 "dhchap_digests": [ 00:17:35.987 "sha256", 00:17:35.987 "sha384", 00:17:35.987 "sha512" 00:17:35.987 ], 00:17:35.987 "dhchap_dhgroups": [ 00:17:35.987 "null", 00:17:35.987 "ffdhe2048", 00:17:35.987 "ffdhe3072", 00:17:35.987 "ffdhe4096", 00:17:35.987 "ffdhe6144", 00:17:35.987 "ffdhe8192" 00:17:35.987 ] 00:17:35.987 } 00:17:35.987 }, 00:17:35.987 { 00:17:35.987 "method": "bdev_nvme_attach_controller", 00:17:35.987 "params": { 00:17:35.987 "name": "TLSTEST", 00:17:35.987 "trtype": "TCP", 00:17:35.987 "adrfam": "IPv4", 00:17:35.987 "traddr": "10.0.0.2", 00:17:35.987 "trsvcid": "4420", 00:17:35.987 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.987 "prchk_reftag": false, 00:17:35.987 "prchk_guard": false, 00:17:35.987 "ctrlr_loss_timeout_sec": 0, 00:17:35.987 "reconnect_delay_sec": 0, 00:17:35.987 "fast_io_fail_timeout_sec": 0, 00:17:35.987 "psk": "/tmp/tmp.kuFUBOcwav", 00:17:35.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.987 "hdgst": false, 00:17:35.987 "ddgst": false 00:17:35.987 } 00:17:35.987 }, 00:17:35.987 { 00:17:35.987 "method": "bdev_nvme_set_hotplug", 00:17:35.987 "params": { 00:17:35.987 "period_us": 100000, 00:17:35.987 "enable": false 00:17:35.987 } 00:17:35.987 }, 00:17:35.987 { 00:17:35.987 "method": "bdev_wait_for_examine" 00:17:35.987 } 00:17:35.987 ] 00:17:35.987 }, 00:17:35.987 { 00:17:35.987 "subsystem": "nbd", 00:17:35.987 "config": [] 00:17:35.987 } 00:17:35.987 ] 00:17:35.987 }' 00:17:35.987 10:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.987 10:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.987 [2024-07-15 10:34:24.349467] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:35.987 [2024-07-15 10:34:24.349555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224571 ] 00:17:35.987 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.987 [2024-07-15 10:34:24.405708] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.987 [2024-07-15 10:34:24.512319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.246 [2024-07-15 10:34:24.679493] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:36.246 [2024-07-15 10:34:24.679625] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:36.810 10:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.810 10:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:36.810 10:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:37.068 Running I/O for 10 seconds... 00:17:47.024 00:17:47.024 Latency(us) 00:17:47.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.024 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:47.024 Verification LBA range: start 0x0 length 0x2000 00:17:47.024 TLSTESTn1 : 10.02 3515.22 13.73 0.00 0.00 36350.95 7767.23 31457.28 00:17:47.024 =================================================================================================================== 00:17:47.024 Total : 3515.22 13.73 0.00 0.00 36350.95 7767.23 31457.28 00:17:47.024 0 00:17:47.024 10:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:47.024 10:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1224571 00:17:47.024 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1224571 ']' 00:17:47.024 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1224571 00:17:47.024 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:47.024 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.024 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1224571 00:17:47.024 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:47.024 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:47.024 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1224571' 00:17:47.024 killing process with pid 1224571 00:17:47.024 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1224571 00:17:47.024 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.024 00:17:47.024 Latency(us) 00:17:47.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.024 =================================================================================================================== 00:17:47.024 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.024 [2024-07-15 10:34:35.538355] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.024 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1224571 00:17:47.282 10:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1224423 00:17:47.282 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1224423 ']' 00:17:47.282 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1224423 00:17:47.282 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:47.282 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.282 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1224423 00:17:47.282 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:47.282 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:47.282 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1224423' 00:17:47.282 killing process with pid 1224423 00:17:47.282 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1224423 00:17:47.282 [2024-07-15 10:34:35.831697] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:47.282 10:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1224423 00:17:47.849 10:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:17:47.849 10:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:47.849 10:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:47.849 10:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.849 10:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1226018 00:17:47.849 10:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:47.849 10:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1226018 00:17:47.849 10:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1226018 ']' 00:17:47.849 10:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.849 10:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.849 10:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.849 10:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.849 10:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.849 [2024-07-15 10:34:36.148527] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:47.849 [2024-07-15 10:34:36.148620] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.849 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.849 [2024-07-15 10:34:36.209870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.849 [2024-07-15 10:34:36.307156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.849 [2024-07-15 10:34:36.307211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.849 [2024-07-15 10:34:36.307239] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.849 [2024-07-15 10:34:36.307250] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.849 [2024-07-15 10:34:36.307259] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.849 [2024-07-15 10:34:36.307289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.106 10:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.106 10:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:48.106 10:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:48.106 10:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:48.106 10:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.106 10:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.106 10:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.kuFUBOcwav 00:17:48.106 10:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kuFUBOcwav 00:17:48.106 10:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:48.364 [2024-07-15 10:34:36.695582] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.364 10:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:48.622 10:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:48.880 [2024-07-15 10:34:37.188991] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:48.880 [2024-07-15 10:34:37.189241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.880 10:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:49.137 malloc0 00:17:49.137 10:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:49.395 10:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.kuFUBOcwav 00:17:49.653 [2024-07-15 10:34:38.065188] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:49.653 10:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1226192 00:17:49.653 10:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:49.653 10:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:49.653 10:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1226192 /var/tmp/bdevperf.sock 00:17:49.653 10:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1226192 ']' 00:17:49.653 10:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.653 10:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.653 10:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.653 10:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.653 10:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.653 [2024-07-15 10:34:38.130469] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:49.653 [2024-07-15 10:34:38.130556] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226192 ] 00:17:49.653 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.653 [2024-07-15 10:34:38.195003] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.911 [2024-07-15 10:34:38.306568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.911 10:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.911 10:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:49.911 10:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kuFUBOcwav 00:17:50.169 10:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:50.428 [2024-07-15 10:34:38.960518] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:50.686 nvme0n1 00:17:50.686 10:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:50.686 Running I/O for 1 seconds... 
00:17:52.059 00:17:52.059 Latency(us) 00:17:52.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.059 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:52.059 Verification LBA range: start 0x0 length 0x2000 00:17:52.059 nvme0n1 : 1.02 3547.67 13.86 0.00 0.00 35704.87 7330.32 37088.52 00:17:52.059 =================================================================================================================== 00:17:52.059 Total : 3547.67 13.86 0.00 0.00 35704.87 7330.32 37088.52 00:17:52.059 0 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1226192 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1226192 ']' 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1226192 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1226192 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1226192' 00:17:52.059 killing process with pid 1226192 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1226192 00:17:52.059 Received shutdown signal, test time was about 1.000000 seconds 00:17:52.059 00:17:52.059 Latency(us) 00:17:52.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.059 =================================================================================================================== 00:17:52.059 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1226192 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1226018 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1226018 ']' 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1226018 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1226018 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1226018' 00:17:52.059 killing process with pid 1226018 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1226018 00:17:52.059 [2024-07-15 10:34:40.499554] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:52.059 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1226018 00:17:52.317 10:34:40 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:17:52.317 10:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.317 
10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:52.317 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.317 10:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1226583 00:17:52.317 10:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:52.317 10:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1226583 00:17:52.317 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1226583 ']' 00:17:52.317 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.317 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.317 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.317 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.317 10:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.317 [2024-07-15 10:34:40.806920] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:52.317 [2024-07-15 10:34:40.807017] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.317 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.574 [2024-07-15 10:34:40.869277] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.574 [2024-07-15 10:34:40.966162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.574 [2024-07-15 10:34:40.966221] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.574 [2024-07-15 10:34:40.966249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.574 [2024-07-15 10:34:40.966260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.574 [2024-07-15 10:34:40.966270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:52.574 [2024-07-15 10:34:40.966294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.574 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.574 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:52.574 10:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:52.574 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:52.574 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.574 10:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.574 10:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:17:52.574 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.574 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.574 [2024-07-15 10:34:41.101939] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.574 malloc0 00:17:52.831 [2024-07-15 10:34:41.132588] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:52.831 [2024-07-15 10:34:41.132838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.831 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.831 10:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1226610 00:17:52.831 10:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:52.831 10:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1226610 /var/tmp/bdevperf.sock 00:17:52.831 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1226610 ']' 00:17:52.831 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.831 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.831 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.831 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.831 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.831 [2024-07-15 10:34:41.201389] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:52.831 [2024-07-15 10:34:41.201468] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226610 ] 00:17:52.831 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.831 [2024-07-15 10:34:41.258259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.831 [2024-07-15 10:34:41.362919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.088 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.088 10:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:53.088 10:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kuFUBOcwav 00:17:53.344 10:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:53.601 [2024-07-15 10:34:42.004900] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:53.601 nvme0n1 00:17:53.601 10:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:53.859 Running I/O for 1 seconds... 00:17:54.793 00:17:54.793 Latency(us) 00:17:54.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.793 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:54.793 Verification LBA range: start 0x0 length 0x2000 00:17:54.793 nvme0n1 : 1.03 2904.02 11.34 0.00 0.00 43462.48 11553.75 41166.32 00:17:54.793 =================================================================================================================== 00:17:54.793 Total : 2904.02 11.34 0.00 0.00 43462.48 11553.75 41166.32 00:17:54.793 0 00:17:54.793 10:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:17:54.793 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.793 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.793 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.050 10:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:17:55.050 "subsystems": [ 00:17:55.050 { 00:17:55.050 "subsystem": "keyring", 00:17:55.050 "config": [ 00:17:55.050 { 00:17:55.050 "method": "keyring_file_add_key", 00:17:55.050 "params": { 00:17:55.050 "name": "key0", 00:17:55.050 "path": "/tmp/tmp.kuFUBOcwav" 00:17:55.050 } 00:17:55.050 } 00:17:55.050 ] 00:17:55.050 }, 00:17:55.050 { 00:17:55.050 "subsystem": "iobuf", 00:17:55.050 "config": [ 00:17:55.050 { 00:17:55.050 "method": "iobuf_set_options", 00:17:55.050 "params": { 00:17:55.050 "small_pool_count": 8192, 00:17:55.050 "large_pool_count": 1024, 00:17:55.050 "small_bufsize": 8192, 00:17:55.050 "large_bufsize": 135168 00:17:55.050 } 00:17:55.050 } 00:17:55.050 ] 00:17:55.050 }, 00:17:55.050 { 00:17:55.050 "subsystem": "sock", 00:17:55.050 "config": [ 00:17:55.050 { 00:17:55.050 "method": "sock_set_default_impl", 00:17:55.050 "params": { 00:17:55.050 "impl_name": "posix" 00:17:55.050 } 
00:17:55.050 }, 00:17:55.050 { 00:17:55.050 "method": "sock_impl_set_options", 00:17:55.050 "params": { 00:17:55.050 "impl_name": "ssl", 00:17:55.050 "recv_buf_size": 4096, 00:17:55.050 "send_buf_size": 4096, 00:17:55.050 "enable_recv_pipe": true, 00:17:55.050 "enable_quickack": false, 00:17:55.050 "enable_placement_id": 0, 00:17:55.050 "enable_zerocopy_send_server": true, 00:17:55.050 "enable_zerocopy_send_client": false, 00:17:55.050 "zerocopy_threshold": 0, 00:17:55.050 "tls_version": 0, 00:17:55.050 "enable_ktls": false 00:17:55.050 } 00:17:55.050 }, 00:17:55.050 { 00:17:55.050 "method": "sock_impl_set_options", 00:17:55.050 "params": { 00:17:55.050 "impl_name": "posix", 00:17:55.050 "recv_buf_size": 2097152, 00:17:55.050 "send_buf_size": 2097152, 00:17:55.050 "enable_recv_pipe": true, 00:17:55.050 "enable_quickack": false, 00:17:55.050 "enable_placement_id": 0, 00:17:55.050 "enable_zerocopy_send_server": true, 00:17:55.050 "enable_zerocopy_send_client": false, 00:17:55.050 "zerocopy_threshold": 0, 00:17:55.050 "tls_version": 0, 00:17:55.050 "enable_ktls": false 00:17:55.050 } 00:17:55.050 } 00:17:55.050 ] 00:17:55.050 }, 00:17:55.050 { 00:17:55.051 "subsystem": "vmd", 00:17:55.051 "config": [] 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "subsystem": "accel", 00:17:55.051 "config": [ 00:17:55.051 { 00:17:55.051 "method": "accel_set_options", 00:17:55.051 "params": { 00:17:55.051 "small_cache_size": 128, 00:17:55.051 "large_cache_size": 16, 00:17:55.051 "task_count": 2048, 00:17:55.051 "sequence_count": 2048, 00:17:55.051 "buf_count": 2048 00:17:55.051 } 00:17:55.051 } 00:17:55.051 ] 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "subsystem": "bdev", 00:17:55.051 "config": [ 00:17:55.051 { 00:17:55.051 "method": "bdev_set_options", 00:17:55.051 "params": { 00:17:55.051 "bdev_io_pool_size": 65535, 00:17:55.051 "bdev_io_cache_size": 256, 00:17:55.051 "bdev_auto_examine": true, 00:17:55.051 "iobuf_small_cache_size": 128, 00:17:55.051 "iobuf_large_cache_size": 16 00:17:55.051 } 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "method": "bdev_raid_set_options", 00:17:55.051 "params": { 00:17:55.051 "process_window_size_kb": 1024 00:17:55.051 } 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "method": "bdev_iscsi_set_options", 00:17:55.051 "params": { 00:17:55.051 "timeout_sec": 30 00:17:55.051 } 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "method": "bdev_nvme_set_options", 00:17:55.051 "params": { 00:17:55.051 "action_on_timeout": "none", 00:17:55.051 "timeout_us": 0, 00:17:55.051 "timeout_admin_us": 0, 00:17:55.051 "keep_alive_timeout_ms": 10000, 00:17:55.051 "arbitration_burst": 0, 00:17:55.051 "low_priority_weight": 0, 00:17:55.051 "medium_priority_weight": 0, 00:17:55.051 "high_priority_weight": 0, 00:17:55.051 "nvme_adminq_poll_period_us": 10000, 00:17:55.051 "nvme_ioq_poll_period_us": 0, 00:17:55.051 "io_queue_requests": 0, 00:17:55.051 "delay_cmd_submit": true, 00:17:55.051 "transport_retry_count": 4, 00:17:55.051 "bdev_retry_count": 3, 00:17:55.051 "transport_ack_timeout": 0, 00:17:55.051 "ctrlr_loss_timeout_sec": 0, 00:17:55.051 "reconnect_delay_sec": 0, 00:17:55.051 "fast_io_fail_timeout_sec": 0, 00:17:55.051 "disable_auto_failback": false, 00:17:55.051 "generate_uuids": false, 00:17:55.051 "transport_tos": 0, 00:17:55.051 "nvme_error_stat": false, 00:17:55.051 "rdma_srq_size": 0, 00:17:55.051 "io_path_stat": false, 00:17:55.051 "allow_accel_sequence": false, 00:17:55.051 "rdma_max_cq_size": 0, 00:17:55.051 "rdma_cm_event_timeout_ms": 0, 00:17:55.051 "dhchap_digests": [ 00:17:55.051 "sha256", 
00:17:55.051 "sha384", 00:17:55.051 "sha512" 00:17:55.051 ], 00:17:55.051 "dhchap_dhgroups": [ 00:17:55.051 "null", 00:17:55.051 "ffdhe2048", 00:17:55.051 "ffdhe3072", 00:17:55.051 "ffdhe4096", 00:17:55.051 "ffdhe6144", 00:17:55.051 "ffdhe8192" 00:17:55.051 ] 00:17:55.051 } 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "method": "bdev_nvme_set_hotplug", 00:17:55.051 "params": { 00:17:55.051 "period_us": 100000, 00:17:55.051 "enable": false 00:17:55.051 } 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "method": "bdev_malloc_create", 00:17:55.051 "params": { 00:17:55.051 "name": "malloc0", 00:17:55.051 "num_blocks": 8192, 00:17:55.051 "block_size": 4096, 00:17:55.051 "physical_block_size": 4096, 00:17:55.051 "uuid": "ea7eacf5-17e7-435f-b2ad-aaffc9829bd3", 00:17:55.051 "optimal_io_boundary": 0 00:17:55.051 } 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "method": "bdev_wait_for_examine" 00:17:55.051 } 00:17:55.051 ] 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "subsystem": "nbd", 00:17:55.051 "config": [] 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "subsystem": "scheduler", 00:17:55.051 "config": [ 00:17:55.051 { 00:17:55.051 "method": "framework_set_scheduler", 00:17:55.051 "params": { 00:17:55.051 "name": "static" 00:17:55.051 } 00:17:55.051 } 00:17:55.051 ] 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "subsystem": "nvmf", 00:17:55.051 "config": [ 00:17:55.051 { 00:17:55.051 "method": "nvmf_set_config", 00:17:55.051 "params": { 00:17:55.051 "discovery_filter": "match_any", 00:17:55.051 "admin_cmd_passthru": { 00:17:55.051 "identify_ctrlr": false 00:17:55.051 } 00:17:55.051 } 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "method": "nvmf_set_max_subsystems", 00:17:55.051 "params": { 00:17:55.051 "max_subsystems": 1024 00:17:55.051 } 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "method": "nvmf_set_crdt", 00:17:55.051 "params": { 00:17:55.051 "crdt1": 0, 00:17:55.051 "crdt2": 0, 00:17:55.051 "crdt3": 0 00:17:55.051 } 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "method": "nvmf_create_transport", 00:17:55.051 "params": { 00:17:55.051 "trtype": "TCP", 00:17:55.051 "max_queue_depth": 128, 00:17:55.051 "max_io_qpairs_per_ctrlr": 127, 00:17:55.051 "in_capsule_data_size": 4096, 00:17:55.051 "max_io_size": 131072, 00:17:55.051 "io_unit_size": 131072, 00:17:55.051 "max_aq_depth": 128, 00:17:55.051 "num_shared_buffers": 511, 00:17:55.051 "buf_cache_size": 4294967295, 00:17:55.051 "dif_insert_or_strip": false, 00:17:55.051 "zcopy": false, 00:17:55.051 "c2h_success": false, 00:17:55.051 "sock_priority": 0, 00:17:55.051 "abort_timeout_sec": 1, 00:17:55.051 "ack_timeout": 0, 00:17:55.051 "data_wr_pool_size": 0 00:17:55.051 } 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "method": "nvmf_create_subsystem", 00:17:55.051 "params": { 00:17:55.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.051 "allow_any_host": false, 00:17:55.051 "serial_number": "00000000000000000000", 00:17:55.051 "model_number": "SPDK bdev Controller", 00:17:55.051 "max_namespaces": 32, 00:17:55.051 "min_cntlid": 1, 00:17:55.051 "max_cntlid": 65519, 00:17:55.051 "ana_reporting": false 00:17:55.051 } 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "method": "nvmf_subsystem_add_host", 00:17:55.051 "params": { 00:17:55.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.051 "host": "nqn.2016-06.io.spdk:host1", 00:17:55.051 "psk": "key0" 00:17:55.051 } 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "method": "nvmf_subsystem_add_ns", 00:17:55.051 "params": { 00:17:55.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.051 "namespace": { 00:17:55.051 "nsid": 1, 
00:17:55.051 "bdev_name": "malloc0", 00:17:55.051 "nguid": "EA7EACF517E7435FB2ADAAFFC9829BD3", 00:17:55.051 "uuid": "ea7eacf5-17e7-435f-b2ad-aaffc9829bd3", 00:17:55.051 "no_auto_visible": false 00:17:55.051 } 00:17:55.051 } 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "method": "nvmf_subsystem_add_listener", 00:17:55.051 "params": { 00:17:55.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.051 "listen_address": { 00:17:55.051 "trtype": "TCP", 00:17:55.051 "adrfam": "IPv4", 00:17:55.051 "traddr": "10.0.0.2", 00:17:55.051 "trsvcid": "4420" 00:17:55.051 }, 00:17:55.051 "secure_channel": true 00:17:55.051 } 00:17:55.051 } 00:17:55.051 ] 00:17:55.051 } 00:17:55.051 ] 00:17:55.051 }' 00:17:55.051 10:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:55.309 10:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:17:55.309 "subsystems": [ 00:17:55.309 { 00:17:55.309 "subsystem": "keyring", 00:17:55.309 "config": [ 00:17:55.309 { 00:17:55.309 "method": "keyring_file_add_key", 00:17:55.309 "params": { 00:17:55.309 "name": "key0", 00:17:55.309 "path": "/tmp/tmp.kuFUBOcwav" 00:17:55.309 } 00:17:55.309 } 00:17:55.309 ] 00:17:55.309 }, 00:17:55.309 { 00:17:55.309 "subsystem": "iobuf", 00:17:55.309 "config": [ 00:17:55.309 { 00:17:55.309 "method": "iobuf_set_options", 00:17:55.309 "params": { 00:17:55.309 "small_pool_count": 8192, 00:17:55.309 "large_pool_count": 1024, 00:17:55.309 "small_bufsize": 8192, 00:17:55.309 "large_bufsize": 135168 00:17:55.309 } 00:17:55.309 } 00:17:55.309 ] 00:17:55.309 }, 00:17:55.309 { 00:17:55.309 "subsystem": "sock", 00:17:55.309 "config": [ 00:17:55.309 { 00:17:55.309 "method": "sock_set_default_impl", 00:17:55.309 "params": { 00:17:55.309 "impl_name": "posix" 00:17:55.309 } 00:17:55.309 }, 00:17:55.309 { 00:17:55.309 "method": "sock_impl_set_options", 00:17:55.309 "params": { 00:17:55.309 "impl_name": "ssl", 00:17:55.309 "recv_buf_size": 4096, 00:17:55.309 "send_buf_size": 4096, 00:17:55.309 "enable_recv_pipe": true, 00:17:55.309 "enable_quickack": false, 00:17:55.309 "enable_placement_id": 0, 00:17:55.309 "enable_zerocopy_send_server": true, 00:17:55.309 "enable_zerocopy_send_client": false, 00:17:55.309 "zerocopy_threshold": 0, 00:17:55.309 "tls_version": 0, 00:17:55.309 "enable_ktls": false 00:17:55.309 } 00:17:55.309 }, 00:17:55.309 { 00:17:55.309 "method": "sock_impl_set_options", 00:17:55.309 "params": { 00:17:55.309 "impl_name": "posix", 00:17:55.309 "recv_buf_size": 2097152, 00:17:55.309 "send_buf_size": 2097152, 00:17:55.309 "enable_recv_pipe": true, 00:17:55.309 "enable_quickack": false, 00:17:55.309 "enable_placement_id": 0, 00:17:55.309 "enable_zerocopy_send_server": true, 00:17:55.309 "enable_zerocopy_send_client": false, 00:17:55.309 "zerocopy_threshold": 0, 00:17:55.309 "tls_version": 0, 00:17:55.309 "enable_ktls": false 00:17:55.309 } 00:17:55.309 } 00:17:55.309 ] 00:17:55.309 }, 00:17:55.309 { 00:17:55.309 "subsystem": "vmd", 00:17:55.309 "config": [] 00:17:55.309 }, 00:17:55.309 { 00:17:55.309 "subsystem": "accel", 00:17:55.309 "config": [ 00:17:55.309 { 00:17:55.309 "method": "accel_set_options", 00:17:55.309 "params": { 00:17:55.309 "small_cache_size": 128, 00:17:55.309 "large_cache_size": 16, 00:17:55.309 "task_count": 2048, 00:17:55.309 "sequence_count": 2048, 00:17:55.309 "buf_count": 2048 00:17:55.309 } 00:17:55.309 } 00:17:55.309 ] 00:17:55.309 }, 00:17:55.309 { 00:17:55.309 "subsystem": "bdev", 00:17:55.309 "config": [ 
00:17:55.309 { 00:17:55.309 "method": "bdev_set_options", 00:17:55.309 "params": { 00:17:55.309 "bdev_io_pool_size": 65535, 00:17:55.309 "bdev_io_cache_size": 256, 00:17:55.309 "bdev_auto_examine": true, 00:17:55.309 "iobuf_small_cache_size": 128, 00:17:55.309 "iobuf_large_cache_size": 16 00:17:55.309 } 00:17:55.309 }, 00:17:55.309 { 00:17:55.309 "method": "bdev_raid_set_options", 00:17:55.309 "params": { 00:17:55.309 "process_window_size_kb": 1024 00:17:55.309 } 00:17:55.309 }, 00:17:55.309 { 00:17:55.309 "method": "bdev_iscsi_set_options", 00:17:55.309 "params": { 00:17:55.309 "timeout_sec": 30 00:17:55.309 } 00:17:55.309 }, 00:17:55.309 { 00:17:55.309 "method": "bdev_nvme_set_options", 00:17:55.309 "params": { 00:17:55.309 "action_on_timeout": "none", 00:17:55.309 "timeout_us": 0, 00:17:55.309 "timeout_admin_us": 0, 00:17:55.309 "keep_alive_timeout_ms": 10000, 00:17:55.309 "arbitration_burst": 0, 00:17:55.309 "low_priority_weight": 0, 00:17:55.309 "medium_priority_weight": 0, 00:17:55.309 "high_priority_weight": 0, 00:17:55.309 "nvme_adminq_poll_period_us": 10000, 00:17:55.309 "nvme_ioq_poll_period_us": 0, 00:17:55.309 "io_queue_requests": 512, 00:17:55.309 "delay_cmd_submit": true, 00:17:55.310 "transport_retry_count": 4, 00:17:55.310 "bdev_retry_count": 3, 00:17:55.310 "transport_ack_timeout": 0, 00:17:55.310 "ctrlr_loss_timeout_sec": 0, 00:17:55.310 "reconnect_delay_sec": 0, 00:17:55.310 "fast_io_fail_timeout_sec": 0, 00:17:55.310 "disable_auto_failback": false, 00:17:55.310 "generate_uuids": false, 00:17:55.310 "transport_tos": 0, 00:17:55.310 "nvme_error_stat": false, 00:17:55.310 "rdma_srq_size": 0, 00:17:55.310 "io_path_stat": false, 00:17:55.310 "allow_accel_sequence": false, 00:17:55.310 "rdma_max_cq_size": 0, 00:17:55.310 "rdma_cm_event_timeout_ms": 0, 00:17:55.310 "dhchap_digests": [ 00:17:55.310 "sha256", 00:17:55.310 "sha384", 00:17:55.310 "sha512" 00:17:55.310 ], 00:17:55.310 "dhchap_dhgroups": [ 00:17:55.310 "null", 00:17:55.310 "ffdhe2048", 00:17:55.310 "ffdhe3072", 00:17:55.310 "ffdhe4096", 00:17:55.310 "ffdhe6144", 00:17:55.310 "ffdhe8192" 00:17:55.310 ] 00:17:55.310 } 00:17:55.310 }, 00:17:55.310 { 00:17:55.310 "method": "bdev_nvme_attach_controller", 00:17:55.310 "params": { 00:17:55.310 "name": "nvme0", 00:17:55.310 "trtype": "TCP", 00:17:55.310 "adrfam": "IPv4", 00:17:55.310 "traddr": "10.0.0.2", 00:17:55.310 "trsvcid": "4420", 00:17:55.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.310 "prchk_reftag": false, 00:17:55.310 "prchk_guard": false, 00:17:55.310 "ctrlr_loss_timeout_sec": 0, 00:17:55.310 "reconnect_delay_sec": 0, 00:17:55.310 "fast_io_fail_timeout_sec": 0, 00:17:55.310 "psk": "key0", 00:17:55.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:55.310 "hdgst": false, 00:17:55.310 "ddgst": false 00:17:55.310 } 00:17:55.310 }, 00:17:55.310 { 00:17:55.310 "method": "bdev_nvme_set_hotplug", 00:17:55.310 "params": { 00:17:55.310 "period_us": 100000, 00:17:55.310 "enable": false 00:17:55.310 } 00:17:55.310 }, 00:17:55.310 { 00:17:55.310 "method": "bdev_enable_histogram", 00:17:55.310 "params": { 00:17:55.310 "name": "nvme0n1", 00:17:55.310 "enable": true 00:17:55.310 } 00:17:55.310 }, 00:17:55.310 { 00:17:55.310 "method": "bdev_wait_for_examine" 00:17:55.310 } 00:17:55.310 ] 00:17:55.310 }, 00:17:55.310 { 00:17:55.310 "subsystem": "nbd", 00:17:55.310 "config": [] 00:17:55.310 } 00:17:55.310 ] 00:17:55.310 }' 00:17:55.310 10:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1226610 00:17:55.310 10:34:43 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 1226610 ']' 00:17:55.310 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1226610 00:17:55.310 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:55.310 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.310 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1226610 00:17:55.310 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:55.310 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:55.310 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1226610' 00:17:55.310 killing process with pid 1226610 00:17:55.310 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1226610 00:17:55.310 Received shutdown signal, test time was about 1.000000 seconds 00:17:55.310 00:17:55.310 Latency(us) 00:17:55.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.310 =================================================================================================================== 00:17:55.310 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.310 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1226610 00:17:55.567 10:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1226583 00:17:55.567 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1226583 ']' 00:17:55.567 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1226583 00:17:55.567 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:55.567 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.567 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1226583 00:17:55.567 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:55.567 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:55.567 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1226583' 00:17:55.567 killing process with pid 1226583 00:17:55.567 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1226583 00:17:55.567 10:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1226583 00:17:55.825 10:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:17:55.825 10:34:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.825 10:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:17:55.825 "subsystems": [ 00:17:55.825 { 00:17:55.825 "subsystem": "keyring", 00:17:55.825 "config": [ 00:17:55.825 { 00:17:55.825 "method": "keyring_file_add_key", 00:17:55.825 "params": { 00:17:55.825 "name": "key0", 00:17:55.825 "path": "/tmp/tmp.kuFUBOcwav" 00:17:55.825 } 00:17:55.825 } 00:17:55.825 ] 00:17:55.825 }, 00:17:55.825 { 00:17:55.825 "subsystem": "iobuf", 00:17:55.825 "config": [ 00:17:55.825 { 00:17:55.825 "method": "iobuf_set_options", 00:17:55.825 "params": { 00:17:55.825 "small_pool_count": 8192, 00:17:55.825 "large_pool_count": 1024, 00:17:55.825 "small_bufsize": 8192, 00:17:55.825 "large_bufsize": 135168 00:17:55.825 } 00:17:55.825 } 00:17:55.825 ] 00:17:55.825 }, 00:17:55.825 { 00:17:55.825 "subsystem": "sock", 00:17:55.825 "config": [ 00:17:55.825 { 
00:17:55.825 "method": "sock_set_default_impl", 00:17:55.825 "params": { 00:17:55.825 "impl_name": "posix" 00:17:55.825 } 00:17:55.825 }, 00:17:55.825 { 00:17:55.825 "method": "sock_impl_set_options", 00:17:55.825 "params": { 00:17:55.825 "impl_name": "ssl", 00:17:55.825 "recv_buf_size": 4096, 00:17:55.825 "send_buf_size": 4096, 00:17:55.825 "enable_recv_pipe": true, 00:17:55.825 "enable_quickack": false, 00:17:55.825 "enable_placement_id": 0, 00:17:55.825 "enable_zerocopy_send_server": true, 00:17:55.825 "enable_zerocopy_send_client": false, 00:17:55.825 "zerocopy_threshold": 0, 00:17:55.825 "tls_version": 0, 00:17:55.825 "enable_ktls": false 00:17:55.825 } 00:17:55.825 }, 00:17:55.825 { 00:17:55.825 "method": "sock_impl_set_options", 00:17:55.825 "params": { 00:17:55.825 "impl_name": "posix", 00:17:55.825 "recv_buf_size": 2097152, 00:17:55.825 "send_buf_size": 2097152, 00:17:55.825 "enable_recv_pipe": true, 00:17:55.825 "enable_quickack": false, 00:17:55.825 "enable_placement_id": 0, 00:17:55.825 "enable_zerocopy_send_server": true, 00:17:55.825 "enable_zerocopy_send_client": false, 00:17:55.825 "zerocopy_threshold": 0, 00:17:55.825 "tls_version": 0, 00:17:55.825 "enable_ktls": false 00:17:55.825 } 00:17:55.825 } 00:17:55.825 ] 00:17:55.825 }, 00:17:55.825 { 00:17:55.825 "subsystem": "vmd", 00:17:55.825 "config": [] 00:17:55.825 }, 00:17:55.825 { 00:17:55.825 "subsystem": "accel", 00:17:55.825 "config": [ 00:17:55.825 { 00:17:55.825 "method": "accel_set_options", 00:17:55.825 "params": { 00:17:55.825 "small_cache_size": 128, 00:17:55.825 "large_cache_size": 16, 00:17:55.825 "task_count": 2048, 00:17:55.825 "sequence_count": 2048, 00:17:55.825 "buf_count": 2048 00:17:55.825 } 00:17:55.825 } 00:17:55.825 ] 00:17:55.825 }, 00:17:55.825 { 00:17:55.825 "subsystem": "bdev", 00:17:55.825 "config": [ 00:17:55.825 { 00:17:55.825 "method": "bdev_set_options", 00:17:55.825 "params": { 00:17:55.825 "bdev_io_pool_size": 65535, 00:17:55.825 "bdev_io_cache_size": 256, 00:17:55.825 "bdev_auto_examine": true, 00:17:55.825 "iobuf_small_cache_size": 128, 00:17:55.825 "iobuf_large_cache_size": 16 00:17:55.825 } 00:17:55.825 }, 00:17:55.825 { 00:17:55.825 "method": "bdev_raid_set_options", 00:17:55.825 "params": { 00:17:55.825 "process_window_size_kb": 1024 00:17:55.825 } 00:17:55.825 }, 00:17:55.825 { 00:17:55.825 "method": "bdev_iscsi_set_options", 00:17:55.825 "params": { 00:17:55.825 "timeout_sec": 30 00:17:55.825 } 00:17:55.825 }, 00:17:55.825 { 00:17:55.825 "method": "bdev_nvme_set_options", 00:17:55.825 "params": { 00:17:55.825 "action_on_timeout": "none", 00:17:55.825 "timeout_us": 0, 00:17:55.825 "timeout_admin_us": 0, 00:17:55.825 "keep_alive_timeout_ms": 10000, 00:17:55.825 "arbitration_burst": 0, 00:17:55.825 "low_priority_weight": 0, 00:17:55.825 "medium_priority_weight": 0, 00:17:55.825 "high_priority_weight": 0, 00:17:55.825 "nvme_adminq_poll_period_us": 10000, 00:17:55.825 "nvme_ioq_poll_period_us": 0, 00:17:55.825 "io_queue_requests": 0, 00:17:55.825 "delay_cmd_submit": true, 00:17:55.825 "transport_retry_count": 4, 00:17:55.825 "bdev_retry_count": 3, 00:17:55.825 "transport_ack_timeout": 0, 00:17:55.825 "ctrlr_loss_timeout_sec": 0, 00:17:55.825 "reconnect_delay_sec": 0, 00:17:55.825 "fast_io_fail_timeout_sec": 0, 00:17:55.825 "disable_auto_failback": false, 00:17:55.825 "generate_uuids": false, 00:17:55.825 "transport_tos": 0, 00:17:55.825 "nvme_error_stat": false, 00:17:55.825 "rdma_srq_size": 0, 00:17:55.825 "io_path_stat": false, 00:17:55.825 "allow_accel_sequence": false, 00:17:55.825 
"rdma_max_cq_size": 0, 00:17:55.825 "rdma_cm_event_timeout_ms": 0, 00:17:55.825 "dhchap_digests": [ 00:17:55.825 "sha256", 00:17:55.825 "sha384", 00:17:55.825 "sha512" 00:17:55.825 ], 00:17:55.825 "dhchap_dhgroups": [ 00:17:55.825 "null", 00:17:55.825 "ffdhe2048", 00:17:55.825 "ffdhe3072", 00:17:55.825 "ffdhe4096", 00:17:55.825 "ffdhe6144", 00:17:55.825 "ffdhe8192" 00:17:55.825 ] 00:17:55.825 } 00:17:55.825 }, 00:17:55.825 { 00:17:55.825 "method": "bdev_nvme_set_hotplug", 00:17:55.825 "params": { 00:17:55.825 "period_us": 100000, 00:17:55.826 "enable": false 00:17:55.826 } 00:17:55.826 }, 00:17:55.826 { 00:17:55.826 "method": "bdev_malloc_create", 00:17:55.826 "params": { 00:17:55.826 "name": "malloc0", 00:17:55.826 "num_blocks": 8192, 00:17:55.826 "block_size": 4096, 00:17:55.826 "physical_block_size": 4096, 00:17:55.826 "uuid": "ea7eacf5-17e7-435f-b2ad-aaffc9829bd3", 00:17:55.826 "optimal_io_boundary": 0 00:17:55.826 } 00:17:55.826 }, 00:17:55.826 { 00:17:55.826 "method": "bdev_wait_for_examine" 00:17:55.826 } 00:17:55.826 ] 00:17:55.826 }, 00:17:55.826 { 00:17:55.826 "subsystem": "nbd", 00:17:55.826 "config": [] 00:17:55.826 }, 00:17:55.826 { 00:17:55.826 "subsystem": "scheduler", 00:17:55.826 "config": [ 00:17:55.826 { 00:17:55.826 "method": "framework_set_scheduler", 00:17:55.826 "params": { 00:17:55.826 "name": "static" 00:17:55.826 } 00:17:55.826 } 00:17:55.826 ] 00:17:55.826 }, 00:17:55.826 { 00:17:55.826 "subsystem": "nvmf", 00:17:55.826 "config": [ 00:17:55.826 { 00:17:55.826 "method": "nvmf_set_config", 00:17:55.826 "params": { 00:17:55.826 "discovery_filter": "match_any", 00:17:55.826 "admin_cmd_passthru": { 00:17:55.826 "identify_ctrlr": false 00:17:55.826 } 00:17:55.826 } 00:17:55.826 }, 00:17:55.826 { 00:17:55.826 "method": "nvmf_set_max_subsystems", 00:17:55.826 "params": { 00:17:55.826 "max_subsystems": 1024 00:17:55.826 } 00:17:55.826 }, 00:17:55.826 { 00:17:55.826 "method": "nvmf_set_crdt", 00:17:55.826 "params": { 00:17:55.826 "crdt1": 0, 00:17:55.826 "crdt2": 0, 00:17:55.826 "crdt3": 0 00:17:55.826 } 00:17:55.826 }, 00:17:55.826 { 00:17:55.826 "method": "nvmf_create_transport", 00:17:55.826 "params": { 00:17:55.826 "trtype": "TCP", 00:17:55.826 "max_queue_depth": 128, 00:17:55.826 "max_io_qpairs_per_ctrlr": 127, 00:17:55.826 "in_capsule_data_size": 4096, 00:17:55.826 "max_io_size": 131072, 00:17:55.826 "io_unit_size": 131072, 00:17:55.826 "max_aq_depth": 128, 00:17:55.826 "num_shared_buffers": 511, 00:17:55.826 "buf_cache_size": 4294967295, 00:17:55.826 "dif_insert_or_strip": false, 00:17:55.826 "zcopy": false, 00:17:55.826 "c2h_success": false, 00:17:55.826 "sock_priority": 0, 00:17:55.826 "abort_timeout_sec": 1, 00:17:55.826 "ack_timeout": 0, 00:17:55.826 "data_wr_pool_size": 0 00:17:55.826 } 00:17:55.826 }, 00:17:55.826 { 00:17:55.826 "method": "nvmf_create_subsystem", 00:17:55.826 "params": { 00:17:55.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.826 "allow_any_host": false, 00:17:55.826 "serial_number": "00000000000000000000", 00:17:55.826 "model_number": "SPDK bdev Controller", 00:17:55.826 "max_namespaces": 32, 00:17:55.826 "min_cntlid": 1, 00:17:55.826 "max_cntlid": 65519, 00:17:55.826 "ana_reporting": false 00:17:55.826 } 00:17:55.826 }, 00:17:55.826 { 00:17:55.826 "method": "nvmf_subsystem_add_host", 00:17:55.826 "params": { 00:17:55.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.826 "host": "nqn.2016-06.io.spdk:host1", 00:17:55.826 "psk": "key0" 00:17:55.826 } 00:17:55.826 }, 00:17:55.826 { 00:17:55.826 "method": "nvmf_subsystem_add_ns", 00:17:55.826 
"params": { 00:17:55.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.826 "namespace": { 00:17:55.826 "nsid": 1, 00:17:55.826 "bdev_name": "malloc0", 00:17:55.826 "nguid": "EA7EACF517E7435FB2ADAAFFC9829BD3", 00:17:55.826 "uuid": "ea7eacf5-17e7-435f-b2ad-aaffc9829bd3", 00:17:55.826 "no_auto_visible": false 00:17:55.826 } 00:17:55.826 } 00:17:55.826 }, 00:17:55.826 { 00:17:55.826 "method": "nvmf_subsystem_add_listener", 00:17:55.826 "params": { 00:17:55.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.826 "listen_address": { 00:17:55.826 "trtype": "TCP", 00:17:55.826 "adrfam": "IPv4", 00:17:55.826 "traddr": "10.0.0.2", 00:17:55.826 "trsvcid": "4420" 00:17:55.826 }, 00:17:55.826 "secure_channel": true 00:17:55.826 } 00:17:55.826 } 00:17:55.826 ] 00:17:55.826 } 00:17:55.826 ] 00:17:55.826 }' 00:17:55.826 10:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:55.826 10:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.826 10:34:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1227018 00:17:55.826 10:34:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:55.826 10:34:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1227018 00:17:55.826 10:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1227018 ']' 00:17:55.826 10:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.826 10:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.826 10:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.826 10:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.826 10:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.826 [2024-07-15 10:34:44.262547] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:55.826 [2024-07-15 10:34:44.262621] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.826 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.826 [2024-07-15 10:34:44.324958] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.083 [2024-07-15 10:34:44.435466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.084 [2024-07-15 10:34:44.435523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.084 [2024-07-15 10:34:44.435552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.084 [2024-07-15 10:34:44.435563] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.084 [2024-07-15 10:34:44.435573] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:56.084 [2024-07-15 10:34:44.435649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.341 [2024-07-15 10:34:44.670906] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.341 [2024-07-15 10:34:44.702913] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.341 [2024-07-15 10:34:44.711981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.907 10:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.907 10:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:56.907 10:34:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:56.907 10:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:56.907 10:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.907 10:34:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.907 10:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1227169 00:17:56.907 10:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1227169 /var/tmp/bdevperf.sock 00:17:56.907 10:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1227169 ']' 00:17:56.907 10:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.907 10:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:56.907 10:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.907 10:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
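bdevperf is started the same way: -z makes it sit idle until it is configured over the RPC socket given with -r, and the test waits for /var/tmp/bdevperf.sock to answer before driving it with rpc.py and bdevperf.py (both calls appear further down). A condensed sketch of that flow, assuming a built SPDK tree and with $bperfcfg holding the JSON saved above with save_config:

    BPERF=./build/examples/bdevperf
    SOCK=/var/tmp/bdevperf.sock

    # -z: wait for RPC-driven start; -c: initial JSON config passed over a file descriptor
    "$BPERF" -m 2 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -c <(echo "$bperfcfg") &

    # Poll until the RPC socket answers (the test script uses its waitforlisten helper for this)
    until ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

    ./scripts/rpc.py -s "$SOCK" bdev_nvme_get_controllers           # nvme0 should now be attached
    ./examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests   # run the queued workload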
00:17:56.907 10:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:17:56.907 "subsystems": [ 00:17:56.907 { 00:17:56.907 "subsystem": "keyring", 00:17:56.907 "config": [ 00:17:56.907 { 00:17:56.907 "method": "keyring_file_add_key", 00:17:56.907 "params": { 00:17:56.907 "name": "key0", 00:17:56.907 "path": "/tmp/tmp.kuFUBOcwav" 00:17:56.907 } 00:17:56.907 } 00:17:56.907 ] 00:17:56.907 }, 00:17:56.907 { 00:17:56.907 "subsystem": "iobuf", 00:17:56.907 "config": [ 00:17:56.907 { 00:17:56.907 "method": "iobuf_set_options", 00:17:56.907 "params": { 00:17:56.907 "small_pool_count": 8192, 00:17:56.907 "large_pool_count": 1024, 00:17:56.907 "small_bufsize": 8192, 00:17:56.907 "large_bufsize": 135168 00:17:56.907 } 00:17:56.907 } 00:17:56.907 ] 00:17:56.907 }, 00:17:56.907 { 00:17:56.907 "subsystem": "sock", 00:17:56.907 "config": [ 00:17:56.907 { 00:17:56.907 "method": "sock_set_default_impl", 00:17:56.907 "params": { 00:17:56.907 "impl_name": "posix" 00:17:56.907 } 00:17:56.907 }, 00:17:56.907 { 00:17:56.907 "method": "sock_impl_set_options", 00:17:56.907 "params": { 00:17:56.907 "impl_name": "ssl", 00:17:56.907 "recv_buf_size": 4096, 00:17:56.907 "send_buf_size": 4096, 00:17:56.907 "enable_recv_pipe": true, 00:17:56.907 "enable_quickack": false, 00:17:56.907 "enable_placement_id": 0, 00:17:56.907 "enable_zerocopy_send_server": true, 00:17:56.907 "enable_zerocopy_send_client": false, 00:17:56.907 "zerocopy_threshold": 0, 00:17:56.907 "tls_version": 0, 00:17:56.907 "enable_ktls": false 00:17:56.907 } 00:17:56.907 }, 00:17:56.907 { 00:17:56.907 "method": "sock_impl_set_options", 00:17:56.907 "params": { 00:17:56.907 "impl_name": "posix", 00:17:56.907 "recv_buf_size": 2097152, 00:17:56.907 "send_buf_size": 2097152, 00:17:56.907 "enable_recv_pipe": true, 00:17:56.907 "enable_quickack": false, 00:17:56.907 "enable_placement_id": 0, 00:17:56.907 "enable_zerocopy_send_server": true, 00:17:56.907 "enable_zerocopy_send_client": false, 00:17:56.907 "zerocopy_threshold": 0, 00:17:56.907 "tls_version": 0, 00:17:56.907 "enable_ktls": false 00:17:56.907 } 00:17:56.907 } 00:17:56.907 ] 00:17:56.907 }, 00:17:56.907 { 00:17:56.907 "subsystem": "vmd", 00:17:56.907 "config": [] 00:17:56.907 }, 00:17:56.907 { 00:17:56.907 "subsystem": "accel", 00:17:56.907 "config": [ 00:17:56.907 { 00:17:56.907 "method": "accel_set_options", 00:17:56.907 "params": { 00:17:56.907 "small_cache_size": 128, 00:17:56.907 "large_cache_size": 16, 00:17:56.907 "task_count": 2048, 00:17:56.907 "sequence_count": 2048, 00:17:56.907 "buf_count": 2048 00:17:56.907 } 00:17:56.907 } 00:17:56.907 ] 00:17:56.907 }, 00:17:56.907 { 00:17:56.907 "subsystem": "bdev", 00:17:56.907 "config": [ 00:17:56.907 { 00:17:56.907 "method": "bdev_set_options", 00:17:56.907 "params": { 00:17:56.907 "bdev_io_pool_size": 65535, 00:17:56.907 "bdev_io_cache_size": 256, 00:17:56.907 "bdev_auto_examine": true, 00:17:56.907 "iobuf_small_cache_size": 128, 00:17:56.907 "iobuf_large_cache_size": 16 00:17:56.907 } 00:17:56.907 }, 00:17:56.907 { 00:17:56.908 "method": "bdev_raid_set_options", 00:17:56.908 "params": { 00:17:56.908 "process_window_size_kb": 1024 00:17:56.908 } 00:17:56.908 }, 00:17:56.908 { 00:17:56.908 "method": "bdev_iscsi_set_options", 00:17:56.908 "params": { 00:17:56.908 "timeout_sec": 30 00:17:56.908 } 00:17:56.908 }, 00:17:56.908 { 00:17:56.908 "method": "bdev_nvme_set_options", 00:17:56.908 "params": { 00:17:56.908 "action_on_timeout": "none", 00:17:56.908 "timeout_us": 0, 00:17:56.908 "timeout_admin_us": 0, 00:17:56.908 "keep_alive_timeout_ms": 
10000, 00:17:56.908 "arbitration_burst": 0, 00:17:56.908 "low_priority_weight": 0, 00:17:56.908 "medium_priority_weight": 0, 00:17:56.908 "high_priority_weight": 0, 00:17:56.908 "nvme_adminq_poll_period_us": 10000, 00:17:56.908 "nvme_ioq_poll_period_us": 0, 00:17:56.908 "io_queue_requests": 512, 00:17:56.908 "delay_cmd_submit": true, 00:17:56.908 "transport_retry_count": 4, 00:17:56.908 "bdev_retry_count": 3, 00:17:56.908 "transport_ack_timeout": 0, 00:17:56.908 "ctrlr_loss_timeout_sec": 0, 00:17:56.908 "reconnect_delay_sec": 0, 00:17:56.908 "fast_io_fail_timeout_sec": 0, 00:17:56.908 "disable_auto_failback": false, 00:17:56.908 "generate_uuids": false, 00:17:56.908 "transport_tos": 0, 00:17:56.908 "nvme_error_stat": false, 00:17:56.908 "rdma_srq_size": 0, 00:17:56.908 "io_path_stat": false, 00:17:56.908 "allow_accel_sequence": false, 00:17:56.908 "rdma_max_cq_size": 0, 00:17:56.908 "rdma_cm_event_timeout_ms": 0, 00:17:56.908 "dhchap_digests": [ 00:17:56.908 "sha256", 00:17:56.908 "sha384", 00:17:56.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.908 "sha512" ], 00:17:56.908 "dhchap_dhgroups": [ 00:17:56.908 "null", 00:17:56.908 "ffdhe2048", 00:17:56.908 "ffdhe3072", 00:17:56.908 "ffdhe4096", 00:17:56.908 "ffdhe6144", 00:17:56.908 "ffdhe8192" 00:17:56.908 ] 00:17:56.908 } 00:17:56.908 }, 00:17:56.908 { 00:17:56.908 "method": "bdev_nvme_attach_controller", 00:17:56.908 "params": { 00:17:56.908 "name": "nvme0", 00:17:56.908 "trtype": "TCP", 00:17:56.908 "adrfam": "IPv4", 00:17:56.908 "traddr": "10.0.0.2", 00:17:56.908 "trsvcid": "4420", 00:17:56.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.908 "prchk_reftag": false, 00:17:56.908 "prchk_guard": false, 00:17:56.908 "ctrlr_loss_timeout_sec": 0, 00:17:56.908 "reconnect_delay_sec": 0, 00:17:56.908 "fast_io_fail_timeout_sec": 0, 00:17:56.908 "psk": "key0", 00:17:56.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.908 "hdgst": false, 00:17:56.908 "ddgst": false 00:17:56.908 } 00:17:56.908 }, 00:17:56.908 { 00:17:56.908 "method": "bdev_nvme_set_hotplug", 00:17:56.908 "params": { 00:17:56.908 "period_us": 100000, 00:17:56.908 "enable": false 00:17:56.908 } 00:17:56.908 }, 00:17:56.908 { 00:17:56.908 "method": "bdev_enable_histogram", 00:17:56.908 "params": { 00:17:56.908 "name": "nvme0n1", 00:17:56.908 "enable": true 00:17:56.908 } 00:17:56.908 }, 00:17:56.908 { 00:17:56.908 "method": "bdev_wait_for_examine" 00:17:56.908 } 00:17:56.908 ] 00:17:56.908 }, 00:17:56.908 { 00:17:56.908 "subsystem": "nbd", 00:17:56.908 "config": [] 00:17:56.908 } 00:17:56.908 ] 00:17:56.908 }' 00:17:56.908 10:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.908 10:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.908 [2024-07-15 10:34:45.326685] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
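The only TLS-specific pieces of that bdevperf configuration are the keyring_file_add_key entry and the "psk": "key0" reference inside bdev_nvme_attach_controller. As a sketch, and assuming the rpc.py option names of this SPDK revision, the same attach could be issued as live RPCs instead of a pre-baked config:

    RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # Register the PSK file under the name the attach call refers to
    $RPC keyring_file_add_key key0 /tmp/tmp.kuFUBOcwav
    # Attach to the secure-channel listener created by the target above
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0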
00:17:56.908 [2024-07-15 10:34:45.326761] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227169 ] 00:17:56.908 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.908 [2024-07-15 10:34:45.384209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.168 [2024-07-15 10:34:45.491857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.168 [2024-07-15 10:34:45.670249] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.798 10:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.798 10:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:57.798 10:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:57.798 10:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:17:58.055 10:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.055 10:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:58.312 Running I/O for 1 seconds... 00:17:59.244 00:17:59.244 Latency(us) 00:17:59.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.244 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:59.244 Verification LBA range: start 0x0 length 0x2000 00:17:59.244 nvme0n1 : 1.03 3529.19 13.79 0.00 0.00 35765.35 9272.13 31651.46 00:17:59.244 =================================================================================================================== 00:17:59.244 Total : 3529.19 13.79 0.00 0.00 35765.35 9272.13 31651.46 00:17:59.244 0 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:59.244 nvmf_trace.0 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1227169 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1227169 ']' 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 1227169 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:59.244 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1227169 00:17:59.502 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:59.502 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:59.502 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1227169' 00:17:59.502 killing process with pid 1227169 00:17:59.502 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1227169 00:17:59.502 Received shutdown signal, test time was about 1.000000 seconds 00:17:59.502 00:17:59.502 Latency(us) 00:17:59.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.502 =================================================================================================================== 00:17:59.502 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:59.502 10:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1227169 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:59.759 rmmod nvme_tcp 00:17:59.759 rmmod nvme_fabrics 00:17:59.759 rmmod nvme_keyring 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1227018 ']' 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1227018 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1227018 ']' 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1227018 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:59.759 10:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:59.760 10:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1227018 00:17:59.760 10:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:59.760 10:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:59.760 10:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1227018' 00:17:59.760 killing process with pid 1227018 00:17:59.760 10:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1227018 00:17:59.760 10:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1227018 00:18:00.038 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:00.038 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:00.038 10:34:48 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:00.038 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:00.038 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:00.038 10:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.038 10:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.038 10:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.942 10:34:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:01.942 10:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.eEfGYfgxIe /tmp/tmp.V7EwLmJ7YP /tmp/tmp.kuFUBOcwav 00:18:01.942 00:18:01.942 real 1m19.930s 00:18:01.942 user 2m11.473s 00:18:01.942 sys 0m24.023s 00:18:01.942 10:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:01.942 10:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.942 ************************************ 00:18:01.942 END TEST nvmf_tls 00:18:01.942 ************************************ 00:18:01.942 10:34:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:01.942 10:34:50 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:01.942 10:34:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:01.942 10:34:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:01.942 10:34:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:02.203 ************************************ 00:18:02.203 START TEST nvmf_fips 00:18:02.203 ************************************ 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:02.204 * Looking for test storage... 
00:18:02.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.204 10:34:50 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:02.204 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:02.205 Error setting digest 00:18:02.205 00A2F388107F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:02.205 00A2F388107F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:02.205 10:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:04.739 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:04.739 
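Device discovery here is driven purely by PCI IDs: the script builds ID lists for Intel E810 (0x1592/0x159b), X722 and Mellanox parts, then walks /sys/bus/pci/devices for matches. A rough manual equivalent, assuming lspci is installed and using the E810 port reported just below as 0000:09:00.0:

    # List E810 ports by vendor:device ID, then show the netdev bound to one port
    for dev in 1592 159b; do
        lspci -D -d 8086:"$dev"
    done
    ls /sys/bus/pci/devices/0000:09:00.0/net/   # should show the cvl_0_0 interface used below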
10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:04.740 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:04.740 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:04.740 Found net devices under 0000:09:00.0: cvl_0_0 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:04.740 Found net devices under 0000:09:00.1: cvl_0_1 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:04.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:04.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:18:04.740 00:18:04.740 --- 10.0.0.2 ping statistics --- 00:18:04.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.740 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:04.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:04.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:18:04.740 00:18:04.740 --- 10.0.0.1 ping statistics --- 00:18:04.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.740 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1229495 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1229495 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1229495 ']' 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.740 10:34:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:04.740 [2024-07-15 10:34:52.946035] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:04.740 [2024-07-15 10:34:52.946119] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.740 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.740 [2024-07-15 10:34:53.004878] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.740 [2024-07-15 10:34:53.105143] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.740 [2024-07-15 10:34:53.105206] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:04.740 [2024-07-15 10:34:53.105220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.740 [2024-07-15 10:34:53.105230] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.740 [2024-07-15 10:34:53.105239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.740 [2024-07-15 10:34:53.105269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.673 10:34:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.673 10:34:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:05.673 10:34:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.673 10:34:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:05.673 10:34:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:05.673 10:34:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.673 10:34:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:05.673 10:34:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:05.673 10:34:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:05.673 10:34:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:05.673 10:34:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:05.673 10:34:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:05.673 10:34:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:05.673 10:34:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.673 [2024-07-15 10:34:54.144296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.673 [2024-07-15 10:34:54.160284] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.673 [2024-07-15 10:34:54.160475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.673 [2024-07-15 10:34:54.191210] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:05.673 malloc0 00:18:05.673 10:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:05.673 10:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1229684 00:18:05.673 10:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:05.673 10:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1229684 /var/tmp/bdevperf.sock 00:18:05.673 10:34:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1229684 ']' 00:18:05.673 10:34:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.673 10:34:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:18:05.673 10:34:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.673 10:34:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.673 10:34:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:05.932 [2024-07-15 10:34:54.281355] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:05.932 [2024-07-15 10:34:54.281422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229684 ] 00:18:05.932 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.932 [2024-07-15 10:34:54.337950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.932 [2024-07-15 10:34:54.442978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.190 10:34:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.190 10:34:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:06.190 10:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:06.448 [2024-07-15 10:34:54.820376] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.448 [2024-07-15 10:34:54.820480] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:06.448 TLSTESTn1 00:18:06.448 10:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:06.706 Running I/O for 10 seconds... 
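Condensed, the nvmf_fips flow traced above is: write a TLS PSK in NVMe interchange format, configure the target subsystem and TLS listener through rpc.py (setup_nvmf_tgt_conf pipes that configuration in one shot, so only the resulting *NOTICE* lines are visible here), then attach a TLS-protected controller from a separate bdevperf process and drive verify I/O against it. A minimal sketch of the initiator-side commands as they appear in the trace, with the workspace prefix shortened and the PSK value elided:

  # PSK shared by target and initiator; the test keeps it at mode 0600
  echo -n 'NVMeTLSkey-1:01:...' > key.txt && chmod 0600 key.txt
  # bdevperf waits on its own RPC socket (-z) until a controller is attached
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # TLS is negotiated because --psk points at the key file
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk key.txt
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests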
00:18:16.667 00:18:16.667 Latency(us) 00:18:16.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.667 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:16.667 Verification LBA range: start 0x0 length 0x2000 00:18:16.667 TLSTESTn1 : 10.02 3583.25 14.00 0.00 0.00 35661.12 7912.87 36894.34 00:18:16.667 =================================================================================================================== 00:18:16.667 Total : 3583.25 14.00 0.00 0.00 35661.12 7912.87 36894.34 00:18:16.667 0 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:16.667 nvmf_trace.0 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1229684 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1229684 ']' 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1229684 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1229684 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1229684' 00:18:16.667 killing process with pid 1229684 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1229684 00:18:16.667 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.667 00:18:16.667 Latency(us) 00:18:16.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.667 =================================================================================================================== 00:18:16.667 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.667 [2024-07-15 10:35:05.195398] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:16.667 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1229684 00:18:16.925 10:35:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:16.925 10:35:05 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:18:16.925 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:16.925 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:16.925 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:16.925 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:16.925 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:16.925 rmmod nvme_tcp 00:18:16.925 rmmod nvme_fabrics 00:18:17.182 rmmod nvme_keyring 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1229495 ']' 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1229495 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1229495 ']' 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1229495 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1229495 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1229495' 00:18:17.182 killing process with pid 1229495 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1229495 00:18:17.182 [2024-07-15 10:35:05.521965] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:17.182 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1229495 00:18:17.441 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.441 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:17.441 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.441 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.441 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.441 10:35:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.441 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.441 10:35:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.345 10:35:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:19.345 10:35:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:19.345 00:18:19.345 real 0m17.325s 00:18:19.345 user 0m22.872s 00:18:19.345 sys 0m5.188s 00:18:19.345 10:35:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:19.345 10:35:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:19.345 ************************************ 00:18:19.345 END TEST nvmf_fips 
00:18:19.345 ************************************ 00:18:19.345 10:35:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:19.345 10:35:07 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:19.345 10:35:07 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:19.345 10:35:07 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:18:19.345 10:35:07 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:18:19.345 10:35:07 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:18:19.345 10:35:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:21.877 10:35:09 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:21.878 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:21.878 10:35:09 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:21.878 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:21.878 Found net devices under 0000:09:00.0: cvl_0_0 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:21.878 Found net devices under 0000:09:00.1: cvl_0_1 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:18:21.878 10:35:09 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:21.878 10:35:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:21.878 10:35:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
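The gather_supported_nvmf_pci_devs helper traced here (and again before each nvmftestinit below) resolves the test NICs without any driver-specific tooling: it keeps a list of known Intel E810 (0x1592, 0x159b), X722 (0x37d2) and Mellanox device IDs, intersects it with the host's PCI bus cache, and reads the bound netdev names straight from sysfs. A rough equivalent for one of the ports found above, assuming the same 0000:09:00.0 address seen in the log:

  pci=0000:09:00.0                      # E810 port, device ID 0x8086:0x159b, ice driver
  ls /sys/bus/pci/devices/$pci/net/     # prints cvl_0_0, the interface used for the TCP tests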
00:18:21.878 10:35:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:21.878 ************************************ 00:18:21.878 START TEST nvmf_perf_adq 00:18:21.878 ************************************ 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:21.878 * Looking for test storage... 00:18:21.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:21.878 10:35:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:23.778 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:23.778 Found 0000:09:00.1 (0x8086 - 0x159b) 
00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:23.778 Found net devices under 0000:09:00.0: cvl_0_0 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.778 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.779 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.779 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.779 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.779 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.779 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.779 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:23.779 Found net devices under 0000:09:00.1: cvl_0_1 00:18:23.779 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.779 10:35:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:23.779 10:35:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.779 10:35:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:23.779 10:35:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:23.779 10:35:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:23.779 10:35:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:24.346 10:35:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:26.244 10:35:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:31.518 10:35:19 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:31.518 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:31.518 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:31.518 Found net devices under 0000:09:00.0: cvl_0_0 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:31.518 Found net devices under 0000:09:00.1: cvl_0_1 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.518 10:35:19 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:31.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:18:31.518 00:18:31.518 --- 10.0.0.2 ping statistics --- 00:18:31.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.518 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:18:31.518 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:31.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:18:31.518 00:18:31.518 --- 10.0.0.1 ping statistics --- 00:18:31.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.519 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1235436 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1235436 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1235436 ']' 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:31.519 [2024-07-15 10:35:19.775407] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
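Both the fips and perf_adq suites reuse the loopback topology that nvmftestinit builds from the two E810 ports: cvl_0_0 is moved into a private network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt runs inside the namespace so NVMe/TCP traffic crosses the physical NICs (NET_TYPE=phy) rather than the kernel loopback. The essential commands, condensed from the trace with paths shortened:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up      # lo is also brought up in the namespace
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # connectivity is checked in both directions
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc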
00:18:31.519 [2024-07-15 10:35:19.775482] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.519 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.519 [2024-07-15 10:35:19.838743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:31.519 [2024-07-15 10:35:19.946547] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.519 [2024-07-15 10:35:19.946611] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.519 [2024-07-15 10:35:19.946624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.519 [2024-07-15 10:35:19.946636] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.519 [2024-07-15 10:35:19.946645] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.519 [2024-07-15 10:35:19.946761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.519 [2024-07-15 10:35:19.946827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.519 [2024-07-15 10:35:19.946893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:31.519 [2024-07-15 10:35:19.946896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:31.519 10:35:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.519 10:35:20 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:31.832 [2024-07-15 10:35:20.169538] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:31.832 Malloc1 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:31.832 [2024-07-15 10:35:20.220649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1235528 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:18:31.832 10:35:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:31.832 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.763 10:35:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:33.763 10:35:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.763 10:35:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:33.763 10:35:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.763 10:35:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:18:33.763 
"tick_rate": 2700000000, 00:18:33.763 "poll_groups": [ 00:18:33.763 { 00:18:33.763 "name": "nvmf_tgt_poll_group_000", 00:18:33.763 "admin_qpairs": 1, 00:18:33.763 "io_qpairs": 1, 00:18:33.763 "current_admin_qpairs": 1, 00:18:33.763 "current_io_qpairs": 1, 00:18:33.763 "pending_bdev_io": 0, 00:18:33.763 "completed_nvme_io": 20232, 00:18:33.763 "transports": [ 00:18:33.763 { 00:18:33.763 "trtype": "TCP" 00:18:33.763 } 00:18:33.763 ] 00:18:33.763 }, 00:18:33.763 { 00:18:33.763 "name": "nvmf_tgt_poll_group_001", 00:18:33.763 "admin_qpairs": 0, 00:18:33.763 "io_qpairs": 1, 00:18:33.763 "current_admin_qpairs": 0, 00:18:33.763 "current_io_qpairs": 1, 00:18:33.763 "pending_bdev_io": 0, 00:18:33.763 "completed_nvme_io": 20395, 00:18:33.763 "transports": [ 00:18:33.763 { 00:18:33.763 "trtype": "TCP" 00:18:33.763 } 00:18:33.763 ] 00:18:33.763 }, 00:18:33.763 { 00:18:33.763 "name": "nvmf_tgt_poll_group_002", 00:18:33.763 "admin_qpairs": 0, 00:18:33.763 "io_qpairs": 1, 00:18:33.763 "current_admin_qpairs": 0, 00:18:33.763 "current_io_qpairs": 1, 00:18:33.763 "pending_bdev_io": 0, 00:18:33.763 "completed_nvme_io": 20091, 00:18:33.763 "transports": [ 00:18:33.763 { 00:18:33.763 "trtype": "TCP" 00:18:33.763 } 00:18:33.763 ] 00:18:33.763 }, 00:18:33.763 { 00:18:33.763 "name": "nvmf_tgt_poll_group_003", 00:18:33.763 "admin_qpairs": 0, 00:18:33.763 "io_qpairs": 1, 00:18:33.763 "current_admin_qpairs": 0, 00:18:33.763 "current_io_qpairs": 1, 00:18:33.763 "pending_bdev_io": 0, 00:18:33.763 "completed_nvme_io": 20514, 00:18:33.763 "transports": [ 00:18:33.763 { 00:18:33.763 "trtype": "TCP" 00:18:33.763 } 00:18:33.763 ] 00:18:33.763 } 00:18:33.763 ] 00:18:33.763 }' 00:18:33.763 10:35:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:18:33.763 10:35:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:18:33.763 10:35:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:18:33.763 10:35:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:18:33.763 10:35:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1235528 00:18:41.869 Initializing NVMe Controllers 00:18:41.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:41.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:41.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:41.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:41.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:41.869 Initialization complete. Launching workers. 
00:18:41.869 ======================================================== 00:18:41.869 Latency(us) 00:18:41.869 Device Information : IOPS MiB/s Average min max 00:18:41.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10706.20 41.82 5978.14 1808.79 12258.72 00:18:41.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10648.00 41.59 6010.44 2511.81 10133.47 00:18:41.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10615.90 41.47 6028.41 2295.20 10253.91 00:18:41.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10710.20 41.84 5975.71 2460.47 9762.71 00:18:41.869 ======================================================== 00:18:41.869 Total : 42680.29 166.72 5998.09 1808.79 12258.72 00:18:41.869 00:18:41.869 10:35:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:18:41.869 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:41.869 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:18:41.869 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:41.869 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:18:41.869 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:41.869 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:41.869 rmmod nvme_tcp 00:18:41.869 rmmod nvme_fabrics 00:18:41.869 rmmod nvme_keyring 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1235436 ']' 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1235436 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1235436 ']' 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1235436 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1235436 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1235436' 00:18:42.127 killing process with pid 1235436 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1235436 00:18:42.127 10:35:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1235436 00:18:42.386 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:42.386 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:42.386 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:42.386 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.386 10:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.386 10:35:30 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.386 10:35:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.386 10:35:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.287 10:35:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:44.287 10:35:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:18:44.287 10:35:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:45.222 10:35:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:47.120 10:35:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.395 
10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:52.395 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:52.395 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
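The gather_supported_nvmf_pci_devs walk shown here boils down to: build the list of supported NIC functions from the cached PCI bus scan (this rig matches the E810 ID 0x8086:0x159b), resolve each PCI address to its kernel net device through sysfs, and keep only interfaces that are up. A rough standalone equivalent, using lspci in place of the harness's cached scan (a sketch, not the script's actual code path):

  # enumerate E810 (8086:159b) functions and print their net devices, as common.sh does via sysfs
  for pci in $(lspci -Dd 8086:159b | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [[ -e $netdir ]] || continue
          [[ $(cat "$netdir/operstate") == up ]] || continue   # mirrors the "up == up" check
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done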
00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.395 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:52.396 Found net devices under 0000:09:00.0: cvl_0_0 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:52.396 Found net devices under 0000:09:00.1: cvl_0_1 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.396 
10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:52.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:18:52.396 00:18:52.396 --- 10.0.0.2 ping statistics --- 00:18:52.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.396 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:18:52.396 00:18:52.396 --- 10.0.0.1 ping statistics --- 00:18:52.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.396 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:18:52.396 net.core.busy_poll = 1 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:18:52.396 net.core.busy_read = 1 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1238219 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1238219 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1238219 ']' 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.396 10:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:52.396 [2024-07-15 10:35:40.831261] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:52.396 [2024-07-15 10:35:40.831350] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.396 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.396 [2024-07-15 10:35:40.912913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.654 [2024-07-15 10:35:41.050359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.654 [2024-07-15 10:35:41.050420] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.654 [2024-07-15 10:35:41.050460] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.654 [2024-07-15 10:35:41.050484] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.654 [2024-07-15 10:35:41.050505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
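The ADQ-enabled second pass differs from the first only in the driver plumbing applied just above: hardware TC offload and busy polling are switched on, the port's queues are split into two traffic classes with mqprio, and a hardware-only flower filter steers NVMe/TCP (destination port 4420) into TC 1; set_xps_rxqs then aligns transmit queues with the matching receive queues. Condensed, the sequence is (run inside the target namespace in this job; interface name and address as used here):

  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

On the SPDK side the matching changes are sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport ... --sock-priority 1, which is how the target groups incoming connections by the hardware queue they arrive on instead of spreading them round-robin.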
00:18:52.654 [2024-07-15 10:35:41.050600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.654 [2024-07-15 10:35:41.050667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.654 [2024-07-15 10:35:41.050744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.654 [2024-07-15 10:35:41.050734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.654 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:52.654 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:18:52.654 10:35:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.655 10:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:52.913 [2024-07-15 10:35:41.315745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:52.913 Malloc1 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.913 10:35:41 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:52.913 [2024-07-15 10:35:41.368748] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1238251 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:18:52.913 10:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:52.913 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.441 10:35:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:18:55.441 10:35:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.441 10:35:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:55.441 10:35:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.441 10:35:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:18:55.441 "tick_rate": 2700000000, 00:18:55.441 "poll_groups": [ 00:18:55.441 { 00:18:55.441 "name": "nvmf_tgt_poll_group_000", 00:18:55.441 "admin_qpairs": 1, 00:18:55.441 "io_qpairs": 1, 00:18:55.441 "current_admin_qpairs": 1, 00:18:55.441 "current_io_qpairs": 1, 00:18:55.441 "pending_bdev_io": 0, 00:18:55.441 "completed_nvme_io": 26337, 00:18:55.441 "transports": [ 00:18:55.441 { 00:18:55.441 "trtype": "TCP" 00:18:55.441 } 00:18:55.441 ] 00:18:55.441 }, 00:18:55.441 { 00:18:55.441 "name": "nvmf_tgt_poll_group_001", 00:18:55.441 "admin_qpairs": 0, 00:18:55.441 "io_qpairs": 3, 00:18:55.441 "current_admin_qpairs": 0, 00:18:55.441 "current_io_qpairs": 3, 00:18:55.441 "pending_bdev_io": 0, 00:18:55.441 "completed_nvme_io": 25230, 00:18:55.441 "transports": [ 00:18:55.441 { 00:18:55.441 "trtype": "TCP" 00:18:55.441 } 00:18:55.441 ] 00:18:55.441 }, 00:18:55.441 { 00:18:55.441 "name": "nvmf_tgt_poll_group_002", 00:18:55.441 "admin_qpairs": 0, 00:18:55.441 "io_qpairs": 0, 00:18:55.441 "current_admin_qpairs": 0, 00:18:55.441 "current_io_qpairs": 0, 00:18:55.441 "pending_bdev_io": 0, 00:18:55.441 "completed_nvme_io": 0, 
00:18:55.441 "transports": [ 00:18:55.441 { 00:18:55.441 "trtype": "TCP" 00:18:55.441 } 00:18:55.441 ] 00:18:55.441 }, 00:18:55.441 { 00:18:55.441 "name": "nvmf_tgt_poll_group_003", 00:18:55.441 "admin_qpairs": 0, 00:18:55.441 "io_qpairs": 0, 00:18:55.441 "current_admin_qpairs": 0, 00:18:55.441 "current_io_qpairs": 0, 00:18:55.441 "pending_bdev_io": 0, 00:18:55.441 "completed_nvme_io": 0, 00:18:55.441 "transports": [ 00:18:55.441 { 00:18:55.441 "trtype": "TCP" 00:18:55.441 } 00:18:55.441 ] 00:18:55.441 } 00:18:55.441 ] 00:18:55.441 }' 00:18:55.441 10:35:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:18:55.441 10:35:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:18:55.441 10:35:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:18:55.441 10:35:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:18:55.441 10:35:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1238251 00:19:03.549 Initializing NVMe Controllers 00:19:03.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:03.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:03.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:03.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:03.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:03.549 Initialization complete. Launching workers. 00:19:03.549 ======================================================== 00:19:03.549 Latency(us) 00:19:03.549 Device Information : IOPS MiB/s Average min max 00:19:03.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4171.20 16.29 15350.35 2521.94 63554.35 00:19:03.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13993.60 54.66 4573.60 1854.24 6724.48 00:19:03.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4444.30 17.36 14406.96 1907.83 62065.66 00:19:03.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4604.00 17.98 13923.20 2317.46 61033.47 00:19:03.549 ======================================================== 00:19:03.549 Total : 27213.10 106.30 9413.18 1854.24 63554.35 00:19:03.549 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:03.549 rmmod nvme_tcp 00:19:03.549 rmmod nvme_fabrics 00:19:03.549 rmmod nvme_keyring 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1238219 ']' 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1238219 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1238219 ']' 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1238219 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1238219 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1238219' 00:19:03.549 killing process with pid 1238219 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1238219 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1238219 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.549 10:35:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.847 10:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:06.847 10:35:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:06.847 00:19:06.847 real 0m45.086s 00:19:06.847 user 2m38.293s 00:19:06.847 sys 0m10.279s 00:19:06.847 10:35:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:06.847 10:35:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.847 ************************************ 00:19:06.847 END TEST nvmf_perf_adq 00:19:06.847 ************************************ 00:19:06.847 10:35:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:06.847 10:35:55 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:06.847 10:35:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:06.847 10:35:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:06.847 10:35:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.847 ************************************ 00:19:06.847 START TEST nvmf_shutdown 00:19:06.847 ************************************ 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:06.847 * Looking for test storage... 
00:19:06.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:06.847 ************************************ 00:19:06.847 START TEST nvmf_shutdown_tc1 00:19:06.847 ************************************ 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:06.847 10:35:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:06.847 10:35:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:08.746 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:08.746 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.746 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.747 10:35:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:08.747 Found net devices under 0000:09:00.0: cvl_0_0 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:08.747 Found net devices under 0000:09:00.1: cvl_0_1 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
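The device scan traced above is the gather_supported_nvmf_pci_devs step: PCI functions are matched against the whitelisted Intel E810 IDs (vendor 0x8086, device 0x159b in this run) and each matching function's network interfaces are read back from sysfs, which is where the "Found net devices under 0000:09:00.x: cvl_0_x" lines come from. A standalone sketch of that sysfs lookup (illustrative only, not the literal common.sh code):

    # List the net interfaces behind each Intel E810 function (8086:159b),
    # mirroring the "Found net devices under <pci>" messages in the trace.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] && echo "Found net devices under $pci: $(basename "$netdir")"
        done
    done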
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.747 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:09.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:19:09.005 00:19:09.005 --- 10.0.0.2 ping statistics --- 00:19:09.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.005 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:09.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:09.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:19:09.005 00:19:09.005 --- 10.0.0.1 ping statistics --- 00:19:09.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.005 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:09.005 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1241542 00:19:09.006 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:09.006 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1241542 00:19:09.006 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1241542 ']' 00:19:09.006 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.006 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.006 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.006 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.006 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:09.006 [2024-07-15 10:35:57.464155] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
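The nvmf_tcp_init sequence in the trace above splits the two ports into an initiator side and a target side: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, cvl_0_1 stays in the default namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are verified with ping before nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -m 0x1E). A condensed standalone sketch of the same setup, using the interface names and addresses taken from the log:

    TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"                    # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"             # initiator side, default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> initiator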
00:19:09.006 [2024-07-15 10:35:57.464233] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.006 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.006 [2024-07-15 10:35:57.524841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:09.264 [2024-07-15 10:35:57.629960] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.264 [2024-07-15 10:35:57.630010] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.264 [2024-07-15 10:35:57.630040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.264 [2024-07-15 10:35:57.630052] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.264 [2024-07-15 10:35:57.630062] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.264 [2024-07-15 10:35:57.630113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.264 [2024-07-15 10:35:57.630174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:09.264 [2024-07-15 10:35:57.630242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:09.264 [2024-07-15 10:35:57.630244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:09.264 [2024-07-15 10:35:57.766390] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:09.264 10:35:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:09.264 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.265 10:35:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:09.523 Malloc1 00:19:09.523 [2024-07-15 10:35:57.841604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.523 Malloc2 00:19:09.523 Malloc3 00:19:09.523 Malloc4 00:19:09.523 Malloc5 00:19:09.523 Malloc6 00:19:09.782 Malloc7 00:19:09.782 Malloc8 00:19:09.782 Malloc9 00:19:09.782 Malloc10 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1241722 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1241722 
/var/tmp/bdevperf.sock 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1241722 ']' 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:09.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.782 { 00:19:09.782 "params": { 00:19:09.782 "name": "Nvme$subsystem", 00:19:09.782 "trtype": "$TEST_TRANSPORT", 00:19:09.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.782 "adrfam": "ipv4", 00:19:09.782 "trsvcid": "$NVMF_PORT", 00:19:09.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.782 "hdgst": ${hdgst:-false}, 00:19:09.782 "ddgst": ${ddgst:-false} 00:19:09.782 }, 00:19:09.782 "method": "bdev_nvme_attach_controller" 00:19:09.782 } 00:19:09.782 EOF 00:19:09.782 )") 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.782 { 00:19:09.782 "params": { 00:19:09.782 "name": "Nvme$subsystem", 00:19:09.782 "trtype": "$TEST_TRANSPORT", 00:19:09.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.782 "adrfam": "ipv4", 00:19:09.782 "trsvcid": "$NVMF_PORT", 00:19:09.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.782 "hdgst": ${hdgst:-false}, 00:19:09.782 "ddgst": ${ddgst:-false} 00:19:09.782 }, 00:19:09.782 "method": "bdev_nvme_attach_controller" 00:19:09.782 } 00:19:09.782 EOF 00:19:09.782 )") 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.782 { 00:19:09.782 "params": { 00:19:09.782 
"name": "Nvme$subsystem", 00:19:09.782 "trtype": "$TEST_TRANSPORT", 00:19:09.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.782 "adrfam": "ipv4", 00:19:09.782 "trsvcid": "$NVMF_PORT", 00:19:09.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.782 "hdgst": ${hdgst:-false}, 00:19:09.782 "ddgst": ${ddgst:-false} 00:19:09.782 }, 00:19:09.782 "method": "bdev_nvme_attach_controller" 00:19:09.782 } 00:19:09.782 EOF 00:19:09.782 )") 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.782 { 00:19:09.782 "params": { 00:19:09.782 "name": "Nvme$subsystem", 00:19:09.782 "trtype": "$TEST_TRANSPORT", 00:19:09.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.782 "adrfam": "ipv4", 00:19:09.782 "trsvcid": "$NVMF_PORT", 00:19:09.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.782 "hdgst": ${hdgst:-false}, 00:19:09.782 "ddgst": ${ddgst:-false} 00:19:09.782 }, 00:19:09.782 "method": "bdev_nvme_attach_controller" 00:19:09.782 } 00:19:09.782 EOF 00:19:09.782 )") 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.782 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.782 { 00:19:09.782 "params": { 00:19:09.782 "name": "Nvme$subsystem", 00:19:09.782 "trtype": "$TEST_TRANSPORT", 00:19:09.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.783 "adrfam": "ipv4", 00:19:09.783 "trsvcid": "$NVMF_PORT", 00:19:09.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.783 "hdgst": ${hdgst:-false}, 00:19:09.783 "ddgst": ${ddgst:-false} 00:19:09.783 }, 00:19:09.783 "method": "bdev_nvme_attach_controller" 00:19:09.783 } 00:19:09.783 EOF 00:19:09.783 )") 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.783 { 00:19:09.783 "params": { 00:19:09.783 "name": "Nvme$subsystem", 00:19:09.783 "trtype": "$TEST_TRANSPORT", 00:19:09.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.783 "adrfam": "ipv4", 00:19:09.783 "trsvcid": "$NVMF_PORT", 00:19:09.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.783 "hdgst": ${hdgst:-false}, 00:19:09.783 "ddgst": ${ddgst:-false} 00:19:09.783 }, 00:19:09.783 "method": "bdev_nvme_attach_controller" 00:19:09.783 } 00:19:09.783 EOF 00:19:09.783 )") 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.783 { 00:19:09.783 "params": { 00:19:09.783 "name": "Nvme$subsystem", 
00:19:09.783 "trtype": "$TEST_TRANSPORT", 00:19:09.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.783 "adrfam": "ipv4", 00:19:09.783 "trsvcid": "$NVMF_PORT", 00:19:09.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.783 "hdgst": ${hdgst:-false}, 00:19:09.783 "ddgst": ${ddgst:-false} 00:19:09.783 }, 00:19:09.783 "method": "bdev_nvme_attach_controller" 00:19:09.783 } 00:19:09.783 EOF 00:19:09.783 )") 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.783 { 00:19:09.783 "params": { 00:19:09.783 "name": "Nvme$subsystem", 00:19:09.783 "trtype": "$TEST_TRANSPORT", 00:19:09.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.783 "adrfam": "ipv4", 00:19:09.783 "trsvcid": "$NVMF_PORT", 00:19:09.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.783 "hdgst": ${hdgst:-false}, 00:19:09.783 "ddgst": ${ddgst:-false} 00:19:09.783 }, 00:19:09.783 "method": "bdev_nvme_attach_controller" 00:19:09.783 } 00:19:09.783 EOF 00:19:09.783 )") 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.783 { 00:19:09.783 "params": { 00:19:09.783 "name": "Nvme$subsystem", 00:19:09.783 "trtype": "$TEST_TRANSPORT", 00:19:09.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.783 "adrfam": "ipv4", 00:19:09.783 "trsvcid": "$NVMF_PORT", 00:19:09.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.783 "hdgst": ${hdgst:-false}, 00:19:09.783 "ddgst": ${ddgst:-false} 00:19:09.783 }, 00:19:09.783 "method": "bdev_nvme_attach_controller" 00:19:09.783 } 00:19:09.783 EOF 00:19:09.783 )") 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.783 { 00:19:09.783 "params": { 00:19:09.783 "name": "Nvme$subsystem", 00:19:09.783 "trtype": "$TEST_TRANSPORT", 00:19:09.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.783 "adrfam": "ipv4", 00:19:09.783 "trsvcid": "$NVMF_PORT", 00:19:09.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.783 "hdgst": ${hdgst:-false}, 00:19:09.783 "ddgst": ${ddgst:-false} 00:19:09.783 }, 00:19:09.783 "method": "bdev_nvme_attach_controller" 00:19:09.783 } 00:19:09.783 EOF 00:19:09.783 )") 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
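The config+=(...) heredoc fragments traced above are gen_nvmf_target_json at work: it emits one bdev_nvme_attach_controller stanza per requested subsystem number (filling in the TCP transport, 10.0.0.2:4420 and the per-subsystem cnodeN/hostN NQNs), joins the stanzas with IFS=, and pretty-prints the result through jq, and the caller hands that JSON to bdev_svc/bdevperf over a /dev/fd process substitution. A simplified generator that reproduces the same stanzas (gen_json is a hypothetical stand-in; the real helper also wraps the array in the full bdev-subsystem JSON structure expected by --json):

    # Minimal sketch: one attach-controller stanza per subsystem, joined and pretty-printed.
    gen_json() {
        local n frags=()
        for n in "$@"; do
            frags+=("{\"params\":{\"name\":\"Nvme$n\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$n\",\"hostnqn\":\"nqn.2016-06.io.spdk:host$n\",\"hdgst\":false,\"ddgst\":false},\"method\":\"bdev_nvme_attach_controller\"}")
        done
        local IFS=,                # join the fragments with commas, as the trace does
        jq . <<< "[${frags[*]}]"   # pretty-print; the real helper nests this under a "bdev" subsystem config
    }
    gen_json 1 2 3 4 5 6 7 8 9 10  # -> one stanza per cnode1..cnode10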
00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:09.783 10:35:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:09.783 "params": { 00:19:09.783 "name": "Nvme1", 00:19:09.783 "trtype": "tcp", 00:19:09.783 "traddr": "10.0.0.2", 00:19:09.783 "adrfam": "ipv4", 00:19:09.783 "trsvcid": "4420", 00:19:09.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.783 "hdgst": false, 00:19:09.783 "ddgst": false 00:19:09.783 }, 00:19:09.783 "method": "bdev_nvme_attach_controller" 00:19:09.783 },{ 00:19:09.783 "params": { 00:19:09.783 "name": "Nvme2", 00:19:09.783 "trtype": "tcp", 00:19:09.783 "traddr": "10.0.0.2", 00:19:09.783 "adrfam": "ipv4", 00:19:09.783 "trsvcid": "4420", 00:19:09.783 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:09.783 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:09.783 "hdgst": false, 00:19:09.783 "ddgst": false 00:19:09.783 }, 00:19:09.783 "method": "bdev_nvme_attach_controller" 00:19:09.783 },{ 00:19:09.783 "params": { 00:19:09.783 "name": "Nvme3", 00:19:09.783 "trtype": "tcp", 00:19:09.783 "traddr": "10.0.0.2", 00:19:09.783 "adrfam": "ipv4", 00:19:09.783 "trsvcid": "4420", 00:19:09.783 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:09.783 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:09.783 "hdgst": false, 00:19:09.783 "ddgst": false 00:19:09.783 }, 00:19:09.783 "method": "bdev_nvme_attach_controller" 00:19:09.783 },{ 00:19:09.783 "params": { 00:19:09.783 "name": "Nvme4", 00:19:09.783 "trtype": "tcp", 00:19:09.783 "traddr": "10.0.0.2", 00:19:09.783 "adrfam": "ipv4", 00:19:09.783 "trsvcid": "4420", 00:19:09.783 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:09.783 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:09.784 "hdgst": false, 00:19:09.784 "ddgst": false 00:19:09.784 }, 00:19:09.784 "method": "bdev_nvme_attach_controller" 00:19:09.784 },{ 00:19:09.784 "params": { 00:19:09.784 "name": "Nvme5", 00:19:09.784 "trtype": "tcp", 00:19:09.784 "traddr": "10.0.0.2", 00:19:09.784 "adrfam": "ipv4", 00:19:09.784 "trsvcid": "4420", 00:19:09.784 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:09.784 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:09.784 "hdgst": false, 00:19:09.784 "ddgst": false 00:19:09.784 }, 00:19:09.784 "method": "bdev_nvme_attach_controller" 00:19:09.784 },{ 00:19:09.784 "params": { 00:19:09.784 "name": "Nvme6", 00:19:09.784 "trtype": "tcp", 00:19:09.784 "traddr": "10.0.0.2", 00:19:09.784 "adrfam": "ipv4", 00:19:09.784 "trsvcid": "4420", 00:19:09.784 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:09.784 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:09.784 "hdgst": false, 00:19:09.784 "ddgst": false 00:19:09.784 }, 00:19:09.784 "method": "bdev_nvme_attach_controller" 00:19:09.784 },{ 00:19:09.784 "params": { 00:19:09.784 "name": "Nvme7", 00:19:09.784 "trtype": "tcp", 00:19:09.784 "traddr": "10.0.0.2", 00:19:09.784 "adrfam": "ipv4", 00:19:09.784 "trsvcid": "4420", 00:19:09.784 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:09.784 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:09.784 "hdgst": false, 00:19:09.784 "ddgst": false 00:19:09.784 }, 00:19:09.784 "method": "bdev_nvme_attach_controller" 00:19:09.784 },{ 00:19:09.784 "params": { 00:19:09.784 "name": "Nvme8", 00:19:09.784 "trtype": "tcp", 00:19:09.784 "traddr": "10.0.0.2", 00:19:09.784 "adrfam": "ipv4", 00:19:09.784 "trsvcid": "4420", 00:19:09.784 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:09.784 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:09.784 "hdgst": false, 
00:19:09.784 "ddgst": false 00:19:09.784 }, 00:19:09.784 "method": "bdev_nvme_attach_controller" 00:19:09.784 },{ 00:19:09.784 "params": { 00:19:09.784 "name": "Nvme9", 00:19:09.784 "trtype": "tcp", 00:19:09.784 "traddr": "10.0.0.2", 00:19:09.784 "adrfam": "ipv4", 00:19:09.784 "trsvcid": "4420", 00:19:09.784 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:09.784 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:09.784 "hdgst": false, 00:19:09.784 "ddgst": false 00:19:09.784 }, 00:19:09.784 "method": "bdev_nvme_attach_controller" 00:19:09.784 },{ 00:19:09.784 "params": { 00:19:09.784 "name": "Nvme10", 00:19:09.784 "trtype": "tcp", 00:19:09.784 "traddr": "10.0.0.2", 00:19:09.784 "adrfam": "ipv4", 00:19:09.784 "trsvcid": "4420", 00:19:09.784 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:09.784 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:09.784 "hdgst": false, 00:19:09.784 "ddgst": false 00:19:09.784 }, 00:19:09.784 "method": "bdev_nvme_attach_controller" 00:19:09.784 }' 00:19:09.784 [2024-07-15 10:35:58.324495] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:09.784 [2024-07-15 10:35:58.324572] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:10.043 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.043 [2024-07-15 10:35:58.388769] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.043 [2024-07-15 10:35:58.498556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.011 10:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:12.011 10:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:12.011 10:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:12.011 10:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.011 10:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:12.011 10:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.011 10:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1241722 00:19:12.011 10:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:12.011 10:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:12.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1241722 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1241542 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:12.944 10:36:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:12.944 { 00:19:12.944 "params": { 00:19:12.944 "name": "Nvme$subsystem", 00:19:12.944 "trtype": "$TEST_TRANSPORT", 00:19:12.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.944 "adrfam": "ipv4", 00:19:12.944 "trsvcid": "$NVMF_PORT", 00:19:12.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.944 "hdgst": ${hdgst:-false}, 00:19:12.944 "ddgst": ${ddgst:-false} 00:19:12.944 }, 00:19:12.944 "method": "bdev_nvme_attach_controller" 00:19:12.944 } 00:19:12.944 EOF 00:19:12.944 )") 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:12.944 { 00:19:12.944 "params": { 00:19:12.944 "name": "Nvme$subsystem", 00:19:12.944 "trtype": "$TEST_TRANSPORT", 00:19:12.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.944 "adrfam": "ipv4", 00:19:12.944 "trsvcid": "$NVMF_PORT", 00:19:12.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.944 "hdgst": ${hdgst:-false}, 00:19:12.944 "ddgst": ${ddgst:-false} 00:19:12.944 }, 00:19:12.944 "method": "bdev_nvme_attach_controller" 00:19:12.944 } 00:19:12.944 EOF 00:19:12.944 )") 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:12.944 { 00:19:12.944 "params": { 00:19:12.944 "name": "Nvme$subsystem", 00:19:12.944 "trtype": "$TEST_TRANSPORT", 00:19:12.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.944 "adrfam": "ipv4", 00:19:12.944 "trsvcid": "$NVMF_PORT", 00:19:12.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.944 "hdgst": ${hdgst:-false}, 00:19:12.944 "ddgst": ${ddgst:-false} 00:19:12.944 }, 00:19:12.944 "method": "bdev_nvme_attach_controller" 00:19:12.944 } 00:19:12.944 EOF 00:19:12.944 )") 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:12.944 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:12.945 { 00:19:12.945 "params": { 00:19:12.945 "name": "Nvme$subsystem", 00:19:12.945 "trtype": "$TEST_TRANSPORT", 00:19:12.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.945 "adrfam": "ipv4", 00:19:12.945 "trsvcid": "$NVMF_PORT", 00:19:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.945 "hdgst": ${hdgst:-false}, 00:19:12.945 "ddgst": ${ddgst:-false} 00:19:12.945 }, 00:19:12.945 "method": "bdev_nvme_attach_controller" 00:19:12.945 } 00:19:12.945 EOF 00:19:12.945 )") 00:19:12.945 10:36:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:12.945 { 00:19:12.945 "params": { 00:19:12.945 "name": "Nvme$subsystem", 00:19:12.945 "trtype": "$TEST_TRANSPORT", 00:19:12.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.945 "adrfam": "ipv4", 00:19:12.945 "trsvcid": "$NVMF_PORT", 00:19:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.945 "hdgst": ${hdgst:-false}, 00:19:12.945 "ddgst": ${ddgst:-false} 00:19:12.945 }, 00:19:12.945 "method": "bdev_nvme_attach_controller" 00:19:12.945 } 00:19:12.945 EOF 00:19:12.945 )") 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:12.945 { 00:19:12.945 "params": { 00:19:12.945 "name": "Nvme$subsystem", 00:19:12.945 "trtype": "$TEST_TRANSPORT", 00:19:12.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.945 "adrfam": "ipv4", 00:19:12.945 "trsvcid": "$NVMF_PORT", 00:19:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.945 "hdgst": ${hdgst:-false}, 00:19:12.945 "ddgst": ${ddgst:-false} 00:19:12.945 }, 00:19:12.945 "method": "bdev_nvme_attach_controller" 00:19:12.945 } 00:19:12.945 EOF 00:19:12.945 )") 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:12.945 { 00:19:12.945 "params": { 00:19:12.945 "name": "Nvme$subsystem", 00:19:12.945 "trtype": "$TEST_TRANSPORT", 00:19:12.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.945 "adrfam": "ipv4", 00:19:12.945 "trsvcid": "$NVMF_PORT", 00:19:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.945 "hdgst": ${hdgst:-false}, 00:19:12.945 "ddgst": ${ddgst:-false} 00:19:12.945 }, 00:19:12.945 "method": "bdev_nvme_attach_controller" 00:19:12.945 } 00:19:12.945 EOF 00:19:12.945 )") 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:12.945 { 00:19:12.945 "params": { 00:19:12.945 "name": "Nvme$subsystem", 00:19:12.945 "trtype": "$TEST_TRANSPORT", 00:19:12.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.945 "adrfam": "ipv4", 00:19:12.945 "trsvcid": "$NVMF_PORT", 00:19:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.945 "hdgst": ${hdgst:-false}, 00:19:12.945 "ddgst": ${ddgst:-false} 00:19:12.945 }, 00:19:12.945 "method": "bdev_nvme_attach_controller" 00:19:12.945 } 00:19:12.945 EOF 00:19:12.945 )") 00:19:12.945 10:36:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:12.945 { 00:19:12.945 "params": { 00:19:12.945 "name": "Nvme$subsystem", 00:19:12.945 "trtype": "$TEST_TRANSPORT", 00:19:12.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.945 "adrfam": "ipv4", 00:19:12.945 "trsvcid": "$NVMF_PORT", 00:19:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.945 "hdgst": ${hdgst:-false}, 00:19:12.945 "ddgst": ${ddgst:-false} 00:19:12.945 }, 00:19:12.945 "method": "bdev_nvme_attach_controller" 00:19:12.945 } 00:19:12.945 EOF 00:19:12.945 )") 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:12.945 { 00:19:12.945 "params": { 00:19:12.945 "name": "Nvme$subsystem", 00:19:12.945 "trtype": "$TEST_TRANSPORT", 00:19:12.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.945 "adrfam": "ipv4", 00:19:12.945 "trsvcid": "$NVMF_PORT", 00:19:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.945 "hdgst": ${hdgst:-false}, 00:19:12.945 "ddgst": ${ddgst:-false} 00:19:12.945 }, 00:19:12.945 "method": "bdev_nvme_attach_controller" 00:19:12.945 } 00:19:12.945 EOF 00:19:12.945 )") 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
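The second gen_nvmf_target_json pass above builds the configuration for bdevperf itself, which the trace launched earlier as build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1; the per-job banner that follows in the output ("workload: verify, depth: 64, IO size: 65536") confirms what those switches mean. A minimal equivalent invocation (paths shortened; gen_json is the illustrative generator sketched above):

    # -q 64    : 64 outstanding I/Os per job (queue depth)
    # -o 65536 : 64 KiB I/O size
    # -w verify: write-and-read-back verification workload
    # -t 1     : run for 1 second
    ./build/examples/bdevperf --json <(gen_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 1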
00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:12.945 10:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:12.945 "params": { 00:19:12.945 "name": "Nvme1", 00:19:12.945 "trtype": "tcp", 00:19:12.945 "traddr": "10.0.0.2", 00:19:12.945 "adrfam": "ipv4", 00:19:12.945 "trsvcid": "4420", 00:19:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.945 "hdgst": false, 00:19:12.945 "ddgst": false 00:19:12.945 }, 00:19:12.945 "method": "bdev_nvme_attach_controller" 00:19:12.945 },{ 00:19:12.945 "params": { 00:19:12.945 "name": "Nvme2", 00:19:12.945 "trtype": "tcp", 00:19:12.945 "traddr": "10.0.0.2", 00:19:12.945 "adrfam": "ipv4", 00:19:12.945 "trsvcid": "4420", 00:19:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:12.945 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:12.945 "hdgst": false, 00:19:12.945 "ddgst": false 00:19:12.945 }, 00:19:12.945 "method": "bdev_nvme_attach_controller" 00:19:12.945 },{ 00:19:12.945 "params": { 00:19:12.945 "name": "Nvme3", 00:19:12.945 "trtype": "tcp", 00:19:12.945 "traddr": "10.0.0.2", 00:19:12.945 "adrfam": "ipv4", 00:19:12.945 "trsvcid": "4420", 00:19:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:12.945 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:12.945 "hdgst": false, 00:19:12.945 "ddgst": false 00:19:12.945 }, 00:19:12.945 "method": "bdev_nvme_attach_controller" 00:19:12.945 },{ 00:19:12.945 "params": { 00:19:12.945 "name": "Nvme4", 00:19:12.945 "trtype": "tcp", 00:19:12.945 "traddr": "10.0.0.2", 00:19:12.945 "adrfam": "ipv4", 00:19:12.945 "trsvcid": "4420", 00:19:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:12.945 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:12.945 "hdgst": false, 00:19:12.945 "ddgst": false 00:19:12.945 }, 00:19:12.945 "method": "bdev_nvme_attach_controller" 00:19:12.945 },{ 00:19:12.945 "params": { 00:19:12.945 "name": "Nvme5", 00:19:12.945 "trtype": "tcp", 00:19:12.945 "traddr": "10.0.0.2", 00:19:12.945 "adrfam": "ipv4", 00:19:12.945 "trsvcid": "4420", 00:19:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:12.945 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:12.945 "hdgst": false, 00:19:12.945 "ddgst": false 00:19:12.945 }, 00:19:12.945 "method": "bdev_nvme_attach_controller" 00:19:12.945 },{ 00:19:12.945 "params": { 00:19:12.945 "name": "Nvme6", 00:19:12.945 "trtype": "tcp", 00:19:12.945 "traddr": "10.0.0.2", 00:19:12.945 "adrfam": "ipv4", 00:19:12.945 "trsvcid": "4420", 00:19:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:12.945 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:12.945 "hdgst": false, 00:19:12.945 "ddgst": false 00:19:12.945 }, 00:19:12.945 "method": "bdev_nvme_attach_controller" 00:19:12.945 },{ 00:19:12.945 "params": { 00:19:12.945 "name": "Nvme7", 00:19:12.945 "trtype": "tcp", 00:19:12.945 "traddr": "10.0.0.2", 00:19:12.945 "adrfam": "ipv4", 00:19:12.945 "trsvcid": "4420", 00:19:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:12.945 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:12.945 "hdgst": false, 00:19:12.945 "ddgst": false 00:19:12.945 }, 00:19:12.946 "method": "bdev_nvme_attach_controller" 00:19:12.946 },{ 00:19:12.946 "params": { 00:19:12.946 "name": "Nvme8", 00:19:12.946 "trtype": "tcp", 00:19:12.946 "traddr": "10.0.0.2", 00:19:12.946 "adrfam": "ipv4", 00:19:12.946 "trsvcid": "4420", 00:19:12.946 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:12.946 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:12.946 "hdgst": false, 
00:19:12.946 "ddgst": false 00:19:12.946 }, 00:19:12.946 "method": "bdev_nvme_attach_controller" 00:19:12.946 },{ 00:19:12.946 "params": { 00:19:12.946 "name": "Nvme9", 00:19:12.946 "trtype": "tcp", 00:19:12.946 "traddr": "10.0.0.2", 00:19:12.946 "adrfam": "ipv4", 00:19:12.946 "trsvcid": "4420", 00:19:12.946 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:12.946 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:12.946 "hdgst": false, 00:19:12.946 "ddgst": false 00:19:12.946 }, 00:19:12.946 "method": "bdev_nvme_attach_controller" 00:19:12.946 },{ 00:19:12.946 "params": { 00:19:12.946 "name": "Nvme10", 00:19:12.946 "trtype": "tcp", 00:19:12.946 "traddr": "10.0.0.2", 00:19:12.946 "adrfam": "ipv4", 00:19:12.946 "trsvcid": "4420", 00:19:12.946 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:12.946 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:12.946 "hdgst": false, 00:19:12.946 "ddgst": false 00:19:12.946 }, 00:19:12.946 "method": "bdev_nvme_attach_controller" 00:19:12.946 }' 00:19:12.946 [2024-07-15 10:36:01.343888] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:12.946 [2024-07-15 10:36:01.343972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242225 ] 00:19:12.946 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.946 [2024-07-15 10:36:01.410383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.203 [2024-07-15 10:36:01.521673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.572 Running I/O for 1 seconds... 00:19:15.942 00:19:15.942 Latency(us) 00:19:15.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.942 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:15.942 Verification LBA range: start 0x0 length 0x400 00:19:15.942 Nvme1n1 : 1.15 221.88 13.87 0.00 0.00 285659.97 18155.90 270299.59 00:19:15.942 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:15.942 Verification LBA range: start 0x0 length 0x400 00:19:15.942 Nvme2n1 : 1.15 227.21 14.20 0.00 0.00 274120.20 3179.71 251658.24 00:19:15.942 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:15.942 Verification LBA range: start 0x0 length 0x400 00:19:15.942 Nvme3n1 : 1.12 228.25 14.27 0.00 0.00 268409.55 16117.00 253211.69 00:19:15.942 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:15.942 Verification LBA range: start 0x0 length 0x400 00:19:15.942 Nvme4n1 : 1.13 227.34 14.21 0.00 0.00 264593.83 15825.73 271853.04 00:19:15.942 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:15.942 Verification LBA range: start 0x0 length 0x400 00:19:15.942 Nvme5n1 : 1.17 219.02 13.69 0.00 0.00 269635.22 11019.76 287387.50 00:19:15.942 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:15.942 Verification LBA range: start 0x0 length 0x400 00:19:15.942 Nvme6n1 : 1.17 219.48 13.72 0.00 0.00 266107.07 22524.97 273406.48 00:19:15.942 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:15.942 Verification LBA range: start 0x0 length 0x400 00:19:15.942 Nvme7n1 : 1.14 224.53 14.03 0.00 0.00 255144.01 20874.43 271853.04 00:19:15.942 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:15.942 Verification LBA range: start 
0x0 length 0x400 00:19:15.942 Nvme8n1 : 1.14 225.32 14.08 0.00 0.00 249761.94 20000.62 254765.13 00:19:15.942 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:15.942 Verification LBA range: start 0x0 length 0x400 00:19:15.942 Nvme9n1 : 1.16 220.45 13.78 0.00 0.00 251511.28 20777.34 270299.59 00:19:15.942 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:15.942 Verification LBA range: start 0x0 length 0x400 00:19:15.942 Nvme10n1 : 1.17 218.03 13.63 0.00 0.00 250429.63 20971.52 298261.62 00:19:15.942 =================================================================================================================== 00:19:15.942 Total : 2231.53 139.47 0.00 0.00 263557.90 3179.71 298261.62 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:15.942 rmmod nvme_tcp 00:19:15.942 rmmod nvme_fabrics 00:19:15.942 rmmod nvme_keyring 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1241542 ']' 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1241542 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1241542 ']' 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1241542 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1241542 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
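As a quick sanity check on the bdevperf summary above: per-controller throughput is IOPS times the 64 KiB I/O size, so Nvme1n1's 221.88 IOPS works out to 221.88 x 65536 B, roughly 13.87 MiB/s, matching its MiB/s column, and the Total row (2231.53 IOPS, 139.47 MiB/s) is simply the sum over the ten attached controllers for the one-second verify run.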
00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1241542' 00:19:15.942 killing process with pid 1241542 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1241542 00:19:15.942 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1241542 00:19:16.508 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:16.508 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:16.508 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:16.508 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.508 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:16.508 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.508 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.508 10:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.040 10:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:19.040 00:19:19.040 real 0m11.855s 00:19:19.040 user 0m33.859s 00:19:19.040 sys 0m3.271s 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:19.040 ************************************ 00:19:19.040 END TEST nvmf_shutdown_tc1 00:19:19.040 ************************************ 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:19.040 ************************************ 00:19:19.040 START TEST nvmf_shutdown_tc2 00:19:19.040 ************************************ 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.040 10:36:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:19.040 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:19.040 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:19.040 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:19.041 Found net devices under 0000:09:00.0: cvl_0_0 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:19.041 Found net devices under 0000:09:00.1: cvl_0_1 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:19.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:19:19.041 00:19:19.041 --- 10.0.0.2 ping statistics --- 00:19:19.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.041 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:19.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:19:19.041 00:19:19.041 --- 10.0.0.1 ping statistics --- 00:19:19.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.041 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=1243107 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1243107 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1243107 ']' 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:19.041 [2024-07-15 10:36:07.283865] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:19.041 [2024-07-15 10:36:07.283942] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.041 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.041 [2024-07-15 10:36:07.346506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:19.041 [2024-07-15 10:36:07.449997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.041 [2024-07-15 10:36:07.450048] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.041 [2024-07-15 10:36:07.450075] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.041 [2024-07-15 10:36:07.450086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.041 [2024-07-15 10:36:07.450096] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:19.041 [2024-07-15 10:36:07.450183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.041 [2024-07-15 10:36:07.450249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:19.041 [2024-07-15 10:36:07.450316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:19.041 [2024-07-15 10:36:07.450319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:19.041 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:19.299 [2024-07-15 10:36:07.599426] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:19.299 10:36:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.299 10:36:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:19.299 Malloc1 00:19:19.299 [2024-07-15 10:36:07.673919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.299 Malloc2 00:19:19.299 Malloc3 00:19:19.299 Malloc4 00:19:19.299 Malloc5 00:19:19.557 Malloc6 00:19:19.557 Malloc7 00:19:19.557 Malloc8 00:19:19.557 Malloc9 00:19:19.557 Malloc10 00:19:19.557 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.557 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:19.557 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:19.814 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:19.814 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1243563 00:19:19.814 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1243563 /var/tmp/bdevperf.sock 00:19:19.814 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1243563 ']' 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:19.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.815 { 00:19:19.815 "params": { 00:19:19.815 "name": "Nvme$subsystem", 00:19:19.815 "trtype": "$TEST_TRANSPORT", 00:19:19.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.815 "adrfam": "ipv4", 00:19:19.815 "trsvcid": "$NVMF_PORT", 00:19:19.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.815 "hdgst": ${hdgst:-false}, 00:19:19.815 "ddgst": ${ddgst:-false} 00:19:19.815 }, 00:19:19.815 "method": "bdev_nvme_attach_controller" 00:19:19.815 } 00:19:19.815 EOF 00:19:19.815 )") 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.815 { 00:19:19.815 "params": { 00:19:19.815 "name": "Nvme$subsystem", 00:19:19.815 "trtype": "$TEST_TRANSPORT", 00:19:19.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.815 "adrfam": "ipv4", 00:19:19.815 "trsvcid": "$NVMF_PORT", 00:19:19.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.815 "hdgst": ${hdgst:-false}, 00:19:19.815 "ddgst": ${ddgst:-false} 00:19:19.815 }, 00:19:19.815 "method": "bdev_nvme_attach_controller" 00:19:19.815 } 00:19:19.815 EOF 00:19:19.815 )") 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.815 { 00:19:19.815 "params": { 00:19:19.815 "name": "Nvme$subsystem", 00:19:19.815 "trtype": "$TEST_TRANSPORT", 00:19:19.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.815 "adrfam": "ipv4", 00:19:19.815 "trsvcid": "$NVMF_PORT", 00:19:19.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.815 "hdgst": ${hdgst:-false}, 00:19:19.815 "ddgst": ${ddgst:-false} 00:19:19.815 }, 00:19:19.815 "method": "bdev_nvme_attach_controller" 00:19:19.815 } 00:19:19.815 EOF 00:19:19.815 )") 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.815 { 00:19:19.815 "params": { 00:19:19.815 "name": "Nvme$subsystem", 00:19:19.815 "trtype": "$TEST_TRANSPORT", 00:19:19.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.815 "adrfam": "ipv4", 00:19:19.815 "trsvcid": "$NVMF_PORT", 
00:19:19.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.815 "hdgst": ${hdgst:-false}, 00:19:19.815 "ddgst": ${ddgst:-false} 00:19:19.815 }, 00:19:19.815 "method": "bdev_nvme_attach_controller" 00:19:19.815 } 00:19:19.815 EOF 00:19:19.815 )") 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.815 { 00:19:19.815 "params": { 00:19:19.815 "name": "Nvme$subsystem", 00:19:19.815 "trtype": "$TEST_TRANSPORT", 00:19:19.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.815 "adrfam": "ipv4", 00:19:19.815 "trsvcid": "$NVMF_PORT", 00:19:19.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.815 "hdgst": ${hdgst:-false}, 00:19:19.815 "ddgst": ${ddgst:-false} 00:19:19.815 }, 00:19:19.815 "method": "bdev_nvme_attach_controller" 00:19:19.815 } 00:19:19.815 EOF 00:19:19.815 )") 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.815 { 00:19:19.815 "params": { 00:19:19.815 "name": "Nvme$subsystem", 00:19:19.815 "trtype": "$TEST_TRANSPORT", 00:19:19.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.815 "adrfam": "ipv4", 00:19:19.815 "trsvcid": "$NVMF_PORT", 00:19:19.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.815 "hdgst": ${hdgst:-false}, 00:19:19.815 "ddgst": ${ddgst:-false} 00:19:19.815 }, 00:19:19.815 "method": "bdev_nvme_attach_controller" 00:19:19.815 } 00:19:19.815 EOF 00:19:19.815 )") 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.815 { 00:19:19.815 "params": { 00:19:19.815 "name": "Nvme$subsystem", 00:19:19.815 "trtype": "$TEST_TRANSPORT", 00:19:19.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.815 "adrfam": "ipv4", 00:19:19.815 "trsvcid": "$NVMF_PORT", 00:19:19.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.815 "hdgst": ${hdgst:-false}, 00:19:19.815 "ddgst": ${ddgst:-false} 00:19:19.815 }, 00:19:19.815 "method": "bdev_nvme_attach_controller" 00:19:19.815 } 00:19:19.815 EOF 00:19:19.815 )") 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.815 { 00:19:19.815 "params": { 00:19:19.815 "name": "Nvme$subsystem", 00:19:19.815 "trtype": "$TEST_TRANSPORT", 00:19:19.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.815 "adrfam": "ipv4", 00:19:19.815 "trsvcid": "$NVMF_PORT", 00:19:19.815 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.815 "hdgst": ${hdgst:-false}, 00:19:19.815 "ddgst": ${ddgst:-false} 00:19:19.815 }, 00:19:19.815 "method": "bdev_nvme_attach_controller" 00:19:19.815 } 00:19:19.815 EOF 00:19:19.815 )") 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.815 { 00:19:19.815 "params": { 00:19:19.815 "name": "Nvme$subsystem", 00:19:19.815 "trtype": "$TEST_TRANSPORT", 00:19:19.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.815 "adrfam": "ipv4", 00:19:19.815 "trsvcid": "$NVMF_PORT", 00:19:19.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.815 "hdgst": ${hdgst:-false}, 00:19:19.815 "ddgst": ${ddgst:-false} 00:19:19.815 }, 00:19:19.815 "method": "bdev_nvme_attach_controller" 00:19:19.815 } 00:19:19.815 EOF 00:19:19.815 )") 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.815 { 00:19:19.815 "params": { 00:19:19.815 "name": "Nvme$subsystem", 00:19:19.815 "trtype": "$TEST_TRANSPORT", 00:19:19.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.815 "adrfam": "ipv4", 00:19:19.815 "trsvcid": "$NVMF_PORT", 00:19:19.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.815 "hdgst": ${hdgst:-false}, 00:19:19.815 "ddgst": ${ddgst:-false} 00:19:19.815 }, 00:19:19.815 "method": "bdev_nvme_attach_controller" 00:19:19.815 } 00:19:19.815 EOF 00:19:19.815 )") 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:19.815 10:36:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:19.815 "params": { 00:19:19.815 "name": "Nvme1", 00:19:19.815 "trtype": "tcp", 00:19:19.815 "traddr": "10.0.0.2", 00:19:19.815 "adrfam": "ipv4", 00:19:19.815 "trsvcid": "4420", 00:19:19.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.815 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:19.815 "hdgst": false, 00:19:19.816 "ddgst": false 00:19:19.816 }, 00:19:19.816 "method": "bdev_nvme_attach_controller" 00:19:19.816 },{ 00:19:19.816 "params": { 00:19:19.816 "name": "Nvme2", 00:19:19.816 "trtype": "tcp", 00:19:19.816 "traddr": "10.0.0.2", 00:19:19.816 "adrfam": "ipv4", 00:19:19.816 "trsvcid": "4420", 00:19:19.816 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:19.816 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:19.816 "hdgst": false, 00:19:19.816 "ddgst": false 00:19:19.816 }, 00:19:19.816 "method": "bdev_nvme_attach_controller" 00:19:19.816 },{ 00:19:19.816 "params": { 00:19:19.816 "name": "Nvme3", 00:19:19.816 "trtype": "tcp", 00:19:19.816 "traddr": "10.0.0.2", 00:19:19.816 "adrfam": "ipv4", 00:19:19.816 "trsvcid": "4420", 00:19:19.816 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:19.816 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:19.816 "hdgst": false, 00:19:19.816 "ddgst": false 00:19:19.816 }, 00:19:19.816 "method": "bdev_nvme_attach_controller" 00:19:19.816 },{ 00:19:19.816 "params": { 00:19:19.816 "name": "Nvme4", 00:19:19.816 "trtype": "tcp", 00:19:19.816 "traddr": "10.0.0.2", 00:19:19.816 "adrfam": "ipv4", 00:19:19.816 "trsvcid": "4420", 00:19:19.816 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:19.816 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:19.816 "hdgst": false, 00:19:19.816 "ddgst": false 00:19:19.816 }, 00:19:19.816 "method": "bdev_nvme_attach_controller" 00:19:19.816 },{ 00:19:19.816 "params": { 00:19:19.816 "name": "Nvme5", 00:19:19.816 "trtype": "tcp", 00:19:19.816 "traddr": "10.0.0.2", 00:19:19.816 "adrfam": "ipv4", 00:19:19.816 "trsvcid": "4420", 00:19:19.816 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:19.816 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:19.816 "hdgst": false, 00:19:19.816 "ddgst": false 00:19:19.816 }, 00:19:19.816 "method": "bdev_nvme_attach_controller" 00:19:19.816 },{ 00:19:19.816 "params": { 00:19:19.816 "name": "Nvme6", 00:19:19.816 "trtype": "tcp", 00:19:19.816 "traddr": "10.0.0.2", 00:19:19.816 "adrfam": "ipv4", 00:19:19.816 "trsvcid": "4420", 00:19:19.816 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:19.816 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:19.816 "hdgst": false, 00:19:19.816 "ddgst": false 00:19:19.816 }, 00:19:19.816 "method": "bdev_nvme_attach_controller" 00:19:19.816 },{ 00:19:19.816 "params": { 00:19:19.816 "name": "Nvme7", 00:19:19.816 "trtype": "tcp", 00:19:19.816 "traddr": "10.0.0.2", 00:19:19.816 "adrfam": "ipv4", 00:19:19.816 "trsvcid": "4420", 00:19:19.816 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:19.816 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:19.816 "hdgst": false, 00:19:19.816 "ddgst": false 00:19:19.816 }, 00:19:19.816 "method": "bdev_nvme_attach_controller" 00:19:19.816 },{ 00:19:19.816 "params": { 00:19:19.816 "name": "Nvme8", 00:19:19.816 "trtype": "tcp", 00:19:19.816 "traddr": "10.0.0.2", 00:19:19.816 "adrfam": "ipv4", 00:19:19.816 "trsvcid": "4420", 00:19:19.816 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:19.816 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:19.816 "hdgst": false, 
00:19:19.816 "ddgst": false 00:19:19.816 }, 00:19:19.816 "method": "bdev_nvme_attach_controller" 00:19:19.816 },{ 00:19:19.816 "params": { 00:19:19.816 "name": "Nvme9", 00:19:19.816 "trtype": "tcp", 00:19:19.816 "traddr": "10.0.0.2", 00:19:19.816 "adrfam": "ipv4", 00:19:19.816 "trsvcid": "4420", 00:19:19.816 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:19.816 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:19.816 "hdgst": false, 00:19:19.816 "ddgst": false 00:19:19.816 }, 00:19:19.816 "method": "bdev_nvme_attach_controller" 00:19:19.816 },{ 00:19:19.816 "params": { 00:19:19.816 "name": "Nvme10", 00:19:19.816 "trtype": "tcp", 00:19:19.816 "traddr": "10.0.0.2", 00:19:19.816 "adrfam": "ipv4", 00:19:19.816 "trsvcid": "4420", 00:19:19.816 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:19.816 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:19.816 "hdgst": false, 00:19:19.816 "ddgst": false 00:19:19.816 }, 00:19:19.816 "method": "bdev_nvme_attach_controller" 00:19:19.816 }' 00:19:19.816 [2024-07-15 10:36:08.167296] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:19.816 [2024-07-15 10:36:08.167372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243563 ] 00:19:19.816 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.816 [2024-07-15 10:36:08.231302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.816 [2024-07-15 10:36:08.340971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.714 Running I/O for 10 seconds... 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:21.714 10:36:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:21.714 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:21.971 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:21.971 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:21.971 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:21.971 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:21.971 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.971 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:21.972 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.972 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:21.972 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:21.972 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1243563 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1243563 ']' 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1243563 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@953 -- # uname 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:22.230 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1243563 00:19:22.487 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:22.487 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:22.487 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1243563' 00:19:22.487 killing process with pid 1243563 00:19:22.487 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1243563 00:19:22.488 10:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1243563 00:19:22.488 Received shutdown signal, test time was about 0.979720 seconds 00:19:22.488 00:19:22.488 Latency(us) 00:19:22.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.488 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.488 Verification LBA range: start 0x0 length 0x400 00:19:22.488 Nvme1n1 : 0.93 205.77 12.86 0.00 0.00 307614.91 22233.69 257872.02 00:19:22.488 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.488 Verification LBA range: start 0x0 length 0x400 00:19:22.488 Nvme2n1 : 0.97 270.15 16.88 0.00 0.00 228387.85 5752.60 246997.90 00:19:22.488 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.488 Verification LBA range: start 0x0 length 0x400 00:19:22.488 Nvme3n1 : 0.95 272.51 17.03 0.00 0.00 222614.45 5461.33 254765.13 00:19:22.488 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.488 Verification LBA range: start 0x0 length 0x400 00:19:22.488 Nvme4n1 : 0.97 263.79 16.49 0.00 0.00 226717.01 16990.81 256318.58 00:19:22.488 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.488 Verification LBA range: start 0x0 length 0x400 00:19:22.488 Nvme5n1 : 0.95 202.65 12.67 0.00 0.00 288886.83 21942.42 256318.58 00:19:22.488 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.488 Verification LBA range: start 0x0 length 0x400 00:19:22.488 Nvme6n1 : 0.98 261.53 16.35 0.00 0.00 220099.13 21068.61 253211.69 00:19:22.488 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.488 Verification LBA range: start 0x0 length 0x400 00:19:22.488 Nvme7n1 : 0.97 262.66 16.42 0.00 0.00 213774.79 18544.26 254765.13 00:19:22.488 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.488 Verification LBA range: start 0x0 length 0x400 00:19:22.488 Nvme8n1 : 0.94 204.47 12.78 0.00 0.00 268347.42 18932.62 256318.58 00:19:22.488 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.488 Verification LBA range: start 0x0 length 0x400 00:19:22.488 Nvme9n1 : 0.96 199.26 12.45 0.00 0.00 270573.67 22330.79 285834.05 00:19:22.488 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.488 Verification LBA range: start 0x0 length 0x400 00:19:22.488 Nvme10n1 : 0.96 204.80 12.80 0.00 0.00 255791.89 4102.07 260978.92 00:19:22.488 
=================================================================================================================== 00:19:22.488 Total : 2347.59 146.72 0.00 0.00 246221.33 4102.07 285834.05 00:19:22.745 10:36:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:19:23.675 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1243107 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:23.676 rmmod nvme_tcp 00:19:23.676 rmmod nvme_fabrics 00:19:23.676 rmmod nvme_keyring 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1243107 ']' 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1243107 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1243107 ']' 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1243107 00:19:23.676 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:19:23.933 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:23.933 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1243107 00:19:23.933 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:23.933 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:23.933 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1243107' 00:19:23.933 killing process with pid 1243107 00:19:23.933 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1243107 00:19:23.933 10:36:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1243107 00:19:24.498 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:24.498 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:24.498 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:24.498 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:24.498 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:24.499 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.499 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:24.499 10:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:26.399 00:19:26.399 real 0m7.760s 00:19:26.399 user 0m23.506s 00:19:26.399 sys 0m1.488s 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:26.399 ************************************ 00:19:26.399 END TEST nvmf_shutdown_tc2 00:19:26.399 ************************************ 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:26.399 ************************************ 00:19:26.399 START TEST nvmf_shutdown_tc3 00:19:26.399 ************************************ 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.399 10:36:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:26.399 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.399 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:26.400 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:26.400 Found net devices under 0000:09:00.0: cvl_0_0 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:26.400 Found net devices under 0000:09:00.1: cvl_0_1 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.400 10:36:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.400 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:26.658 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.658 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.658 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.658 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:26.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:19:26.658 00:19:26.658 --- 10.0.0.2 ping statistics --- 00:19:26.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.658 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:19:26.658 10:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:19:26.658 00:19:26.658 --- 10.0.0.1 ping statistics --- 00:19:26.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.658 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1244615 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x1E 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1244615 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1244615 ']' 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.658 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:26.658 [2024-07-15 10:36:15.081953] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:26.658 [2024-07-15 10:36:15.082040] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.658 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.658 [2024-07-15 10:36:15.145769] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:26.916 [2024-07-15 10:36:15.254876] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.916 [2024-07-15 10:36:15.254923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.916 [2024-07-15 10:36:15.254952] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.916 [2024-07-15 10:36:15.254963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.916 [2024-07-15 10:36:15.254973] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
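The nvmf_tgt instance starting here runs inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init assembled just above: the target-side E810 port cvl_0_0 is moved into the namespace with 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are verified with a single ping. A minimal standalone sketch of that topology, using only the device names, addresses and commands visible in this run (not the full nvmf/common.sh helper; the nvmf_tgt path is abbreviated):

# Sketch of the nvmf_tcp_init steps traced above; assumes the ice ports
# already exist as cvl_0_0 (target side) and cvl_0_1 (initiator side).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP from the initiator
ping -c 1 10.0.0.2                                                   # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &   # target app in the namespace (path abbreviated)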
00:19:26.916 [2024-07-15 10:36:15.255060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.916 [2024-07-15 10:36:15.255124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:26.916 [2024-07-15 10:36:15.255171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:26.916 [2024-07-15 10:36:15.255173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:26.916 [2024-07-15 10:36:15.415735] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:26.916 10:36:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.916 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:27.174 Malloc1 00:19:27.174 [2024-07-15 10:36:15.505308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.174 Malloc2 00:19:27.174 Malloc3 00:19:27.174 Malloc4 00:19:27.174 Malloc5 00:19:27.431 Malloc6 00:19:27.431 Malloc7 00:19:27.431 Malloc8 00:19:27.431 Malloc9 00:19:27.431 Malloc10 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1244721 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1244721 /var/tmp/bdevperf.sock 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1244721 ']' 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
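gen_nvmf_target_json, invoked just above to feed bdevperf through --json /dev/fd/63, appends one bdev_nvme_attach_controller stanza per subsystem and comma-joins them, as the trace below shows. A reduced sketch of that assembly, with this run's transport values hard-coded (only the stanza assembly is sketched; the jq step that produces the final document handed to bdevperf is omitted):

# Reduced sketch of the gen_nvmf_target_json assembly traced below;
# transport values are hard-coded to this run's settings.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=()
for subsystem in {1..10}; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
IFS=,
printf '%s\n' "${config[*]}"    # one attach_controller stanza per subsystem, comma-joined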
00:19:27.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.431 { 00:19:27.431 "params": { 00:19:27.431 "name": "Nvme$subsystem", 00:19:27.431 "trtype": "$TEST_TRANSPORT", 00:19:27.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.431 "adrfam": "ipv4", 00:19:27.431 "trsvcid": "$NVMF_PORT", 00:19:27.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.431 "hdgst": ${hdgst:-false}, 00:19:27.431 "ddgst": ${ddgst:-false} 00:19:27.431 }, 00:19:27.431 "method": "bdev_nvme_attach_controller" 00:19:27.431 } 00:19:27.431 EOF 00:19:27.431 )") 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:27.431 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.432 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.432 { 00:19:27.432 "params": { 00:19:27.432 "name": "Nvme$subsystem", 00:19:27.432 "trtype": "$TEST_TRANSPORT", 00:19:27.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.432 "adrfam": "ipv4", 00:19:27.432 "trsvcid": "$NVMF_PORT", 00:19:27.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.432 "hdgst": ${hdgst:-false}, 00:19:27.432 "ddgst": ${ddgst:-false} 00:19:27.432 }, 00:19:27.432 "method": "bdev_nvme_attach_controller" 00:19:27.432 } 00:19:27.432 EOF 00:19:27.432 )") 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.690 { 00:19:27.690 "params": { 00:19:27.690 "name": "Nvme$subsystem", 00:19:27.690 "trtype": "$TEST_TRANSPORT", 00:19:27.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.690 "adrfam": "ipv4", 00:19:27.690 "trsvcid": "$NVMF_PORT", 00:19:27.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.690 "hdgst": ${hdgst:-false}, 00:19:27.690 "ddgst": ${ddgst:-false} 00:19:27.690 }, 00:19:27.690 "method": "bdev_nvme_attach_controller" 00:19:27.690 } 00:19:27.690 EOF 00:19:27.690 )") 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.690 { 00:19:27.690 "params": { 00:19:27.690 "name": "Nvme$subsystem", 00:19:27.690 "trtype": "$TEST_TRANSPORT", 00:19:27.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.690 "adrfam": "ipv4", 00:19:27.690 "trsvcid": "$NVMF_PORT", 
00:19:27.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.690 "hdgst": ${hdgst:-false}, 00:19:27.690 "ddgst": ${ddgst:-false} 00:19:27.690 }, 00:19:27.690 "method": "bdev_nvme_attach_controller" 00:19:27.690 } 00:19:27.690 EOF 00:19:27.690 )") 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.690 { 00:19:27.690 "params": { 00:19:27.690 "name": "Nvme$subsystem", 00:19:27.690 "trtype": "$TEST_TRANSPORT", 00:19:27.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.690 "adrfam": "ipv4", 00:19:27.690 "trsvcid": "$NVMF_PORT", 00:19:27.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.690 "hdgst": ${hdgst:-false}, 00:19:27.690 "ddgst": ${ddgst:-false} 00:19:27.690 }, 00:19:27.690 "method": "bdev_nvme_attach_controller" 00:19:27.690 } 00:19:27.690 EOF 00:19:27.690 )") 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.690 { 00:19:27.690 "params": { 00:19:27.690 "name": "Nvme$subsystem", 00:19:27.690 "trtype": "$TEST_TRANSPORT", 00:19:27.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.690 "adrfam": "ipv4", 00:19:27.690 "trsvcid": "$NVMF_PORT", 00:19:27.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.690 "hdgst": ${hdgst:-false}, 00:19:27.690 "ddgst": ${ddgst:-false} 00:19:27.690 }, 00:19:27.690 "method": "bdev_nvme_attach_controller" 00:19:27.690 } 00:19:27.690 EOF 00:19:27.690 )") 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.690 { 00:19:27.690 "params": { 00:19:27.690 "name": "Nvme$subsystem", 00:19:27.690 "trtype": "$TEST_TRANSPORT", 00:19:27.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.690 "adrfam": "ipv4", 00:19:27.690 "trsvcid": "$NVMF_PORT", 00:19:27.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.690 "hdgst": ${hdgst:-false}, 00:19:27.690 "ddgst": ${ddgst:-false} 00:19:27.690 }, 00:19:27.690 "method": "bdev_nvme_attach_controller" 00:19:27.690 } 00:19:27.690 EOF 00:19:27.690 )") 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.690 { 00:19:27.690 "params": { 00:19:27.690 "name": "Nvme$subsystem", 00:19:27.690 "trtype": "$TEST_TRANSPORT", 00:19:27.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.690 "adrfam": "ipv4", 00:19:27.690 "trsvcid": "$NVMF_PORT", 00:19:27.690 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.690 "hdgst": ${hdgst:-false}, 00:19:27.690 "ddgst": ${ddgst:-false} 00:19:27.690 }, 00:19:27.690 "method": "bdev_nvme_attach_controller" 00:19:27.690 } 00:19:27.690 EOF 00:19:27.690 )") 00:19:27.690 10:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:27.690 10:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.690 10:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.690 { 00:19:27.690 "params": { 00:19:27.690 "name": "Nvme$subsystem", 00:19:27.690 "trtype": "$TEST_TRANSPORT", 00:19:27.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.690 "adrfam": "ipv4", 00:19:27.690 "trsvcid": "$NVMF_PORT", 00:19:27.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.690 "hdgst": ${hdgst:-false}, 00:19:27.690 "ddgst": ${ddgst:-false} 00:19:27.690 }, 00:19:27.690 "method": "bdev_nvme_attach_controller" 00:19:27.690 } 00:19:27.690 EOF 00:19:27.690 )") 00:19:27.690 10:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:27.690 10:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.690 10:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.690 { 00:19:27.690 "params": { 00:19:27.690 "name": "Nvme$subsystem", 00:19:27.690 "trtype": "$TEST_TRANSPORT", 00:19:27.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.690 "adrfam": "ipv4", 00:19:27.690 "trsvcid": "$NVMF_PORT", 00:19:27.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.690 "hdgst": ${hdgst:-false}, 00:19:27.690 "ddgst": ${ddgst:-false} 00:19:27.690 }, 00:19:27.690 "method": "bdev_nvme_attach_controller" 00:19:27.690 } 00:19:27.690 EOF 00:19:27.690 )") 00:19:27.690 10:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:27.690 10:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:19:27.690 10:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:27.690 10:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:27.690 "params": { 00:19:27.690 "name": "Nvme1", 00:19:27.690 "trtype": "tcp", 00:19:27.690 "traddr": "10.0.0.2", 00:19:27.690 "adrfam": "ipv4", 00:19:27.690 "trsvcid": "4420", 00:19:27.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.690 "hdgst": false, 00:19:27.691 "ddgst": false 00:19:27.691 }, 00:19:27.691 "method": "bdev_nvme_attach_controller" 00:19:27.691 },{ 00:19:27.691 "params": { 00:19:27.691 "name": "Nvme2", 00:19:27.691 "trtype": "tcp", 00:19:27.691 "traddr": "10.0.0.2", 00:19:27.691 "adrfam": "ipv4", 00:19:27.691 "trsvcid": "4420", 00:19:27.691 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:27.691 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:27.691 "hdgst": false, 00:19:27.691 "ddgst": false 00:19:27.691 }, 00:19:27.691 "method": "bdev_nvme_attach_controller" 00:19:27.691 },{ 00:19:27.691 "params": { 00:19:27.691 "name": "Nvme3", 00:19:27.691 "trtype": "tcp", 00:19:27.691 "traddr": "10.0.0.2", 00:19:27.691 "adrfam": "ipv4", 00:19:27.691 "trsvcid": "4420", 00:19:27.691 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:27.691 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:27.691 "hdgst": false, 00:19:27.691 "ddgst": false 00:19:27.691 }, 00:19:27.691 "method": "bdev_nvme_attach_controller" 00:19:27.691 },{ 00:19:27.691 "params": { 00:19:27.691 "name": "Nvme4", 00:19:27.691 "trtype": "tcp", 00:19:27.691 "traddr": "10.0.0.2", 00:19:27.691 "adrfam": "ipv4", 00:19:27.691 "trsvcid": "4420", 00:19:27.691 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:27.691 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:27.691 "hdgst": false, 00:19:27.691 "ddgst": false 00:19:27.691 }, 00:19:27.691 "method": "bdev_nvme_attach_controller" 00:19:27.691 },{ 00:19:27.691 "params": { 00:19:27.691 "name": "Nvme5", 00:19:27.691 "trtype": "tcp", 00:19:27.691 "traddr": "10.0.0.2", 00:19:27.691 "adrfam": "ipv4", 00:19:27.691 "trsvcid": "4420", 00:19:27.691 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:27.691 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:27.691 "hdgst": false, 00:19:27.691 "ddgst": false 00:19:27.691 }, 00:19:27.691 "method": "bdev_nvme_attach_controller" 00:19:27.691 },{ 00:19:27.691 "params": { 00:19:27.691 "name": "Nvme6", 00:19:27.691 "trtype": "tcp", 00:19:27.691 "traddr": "10.0.0.2", 00:19:27.691 "adrfam": "ipv4", 00:19:27.691 "trsvcid": "4420", 00:19:27.691 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:27.691 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:27.691 "hdgst": false, 00:19:27.691 "ddgst": false 00:19:27.691 }, 00:19:27.691 "method": "bdev_nvme_attach_controller" 00:19:27.691 },{ 00:19:27.691 "params": { 00:19:27.691 "name": "Nvme7", 00:19:27.691 "trtype": "tcp", 00:19:27.691 "traddr": "10.0.0.2", 00:19:27.691 "adrfam": "ipv4", 00:19:27.691 "trsvcid": "4420", 00:19:27.691 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:27.691 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:27.691 "hdgst": false, 00:19:27.691 "ddgst": false 00:19:27.691 }, 00:19:27.691 "method": "bdev_nvme_attach_controller" 00:19:27.691 },{ 00:19:27.691 "params": { 00:19:27.691 "name": "Nvme8", 00:19:27.691 "trtype": "tcp", 00:19:27.691 "traddr": "10.0.0.2", 00:19:27.691 "adrfam": "ipv4", 00:19:27.691 "trsvcid": "4420", 00:19:27.691 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:27.691 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:27.691 "hdgst": false, 
00:19:27.691 "ddgst": false 00:19:27.691 }, 00:19:27.691 "method": "bdev_nvme_attach_controller" 00:19:27.691 },{ 00:19:27.691 "params": { 00:19:27.691 "name": "Nvme9", 00:19:27.691 "trtype": "tcp", 00:19:27.691 "traddr": "10.0.0.2", 00:19:27.691 "adrfam": "ipv4", 00:19:27.691 "trsvcid": "4420", 00:19:27.691 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:27.691 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:27.691 "hdgst": false, 00:19:27.691 "ddgst": false 00:19:27.691 }, 00:19:27.691 "method": "bdev_nvme_attach_controller" 00:19:27.691 },{ 00:19:27.691 "params": { 00:19:27.691 "name": "Nvme10", 00:19:27.691 "trtype": "tcp", 00:19:27.691 "traddr": "10.0.0.2", 00:19:27.691 "adrfam": "ipv4", 00:19:27.691 "trsvcid": "4420", 00:19:27.691 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:27.691 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:27.691 "hdgst": false, 00:19:27.691 "ddgst": false 00:19:27.691 }, 00:19:27.691 "method": "bdev_nvme_attach_controller" 00:19:27.691 }' 00:19:27.691 [2024-07-15 10:36:16.019715] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:27.691 [2024-07-15 10:36:16.019831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244721 ] 00:19:27.691 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.691 [2024-07-15 10:36:16.084347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.691 [2024-07-15 10:36:16.194317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.584 Running I/O for 10 seconds... 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:29.841 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:30.098 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:30.098 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:30.098 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:30.098 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:30.098 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.098 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:30.098 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.098 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:30.098 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:30.098 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=135 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1244615 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1244615 ']' 
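The read counts polled above (3, then 67, then 135) come from waitforio, which queries bdevperf over its RPC socket until Nvme1n1 has completed at least 100 reads; only once that threshold is met does tc3 go on to kill the nvmf_tgt process (pid 1244615) while bdevperf still has I/O outstanding. A standalone sketch of that polling loop, using scripts/rpc.py directly in place of the rpc_cmd wrapper (socket path and bdev name as in this run; rpc.py path abbreviated):

# Standalone sketch of the waitforio loop from target/shutdown.sh traced above.
# Assumes bdevperf is already listening on the given RPC socket.
waitforio() {
    local sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0    # enough reads completed; shutting down now interrupts active I/O
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1   # returns 0 once num_read_ops reaches 100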
00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1244615 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1244615 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1244615' 00:19:30.363 killing process with pid 1244615 00:19:30.363 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1244615 00:19:30.364 10:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1244615 00:19:30.364 [2024-07-15 10:36:18.858615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858902] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858950] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858962] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.858999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859010] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859145] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the 
state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.859457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d440 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860722] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860735] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860829] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860879] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860940] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.860988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 
10:36:18.861037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same 
with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.861331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d8e0 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862580] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862714] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862813] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.364 [2024-07-15 10:36:18.862827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.862839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.862851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.862863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.862874] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.862886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.862898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.862910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.862921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.862933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.862945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.862957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.862969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.862981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.862993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863004] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the 
state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.863351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229dd80 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864621] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 
10:36:18.864657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864752] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864775] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864799] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864870] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same 
with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864940] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.864999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865059] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e240 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865874] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.865992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866004] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the 
state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866245] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.365 [2024-07-15 10:36:18.866351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.866636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f940 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.874518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 
[2024-07-15 10:36:18.874579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.874598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.874613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.874627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.874641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.874655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.874669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.874684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe17990 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.874753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.874774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.874789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.874820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.874835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.874864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.874879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.874892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.874905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1d350 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.874954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.874974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.874988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdef600 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.875131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76280 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.875291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 
10:36:18.875385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855610 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.875456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75c60 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.875617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xd53830 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.875777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1f240 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.875952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.875972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.875986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.876000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.876013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.876026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.876040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.876053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.876067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7f450 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.876119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.876138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.876154] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.876167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.876181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.876194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.876208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.366 [2024-07-15 10:36:18.876221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.876234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe17bb0 is same with the state(5) to be set 00:19:30.366 [2024-07-15 10:36:18.876646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.366 [2024-07-15 10:36:18.876670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.876697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.366 [2024-07-15 10:36:18.876713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.876729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.366 [2024-07-15 10:36:18.876744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.366 [2024-07-15 10:36:18.876760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.366 [2024-07-15 10:36:18.876774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.876798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.876821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.876838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.876851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.876867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.876882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.876897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.876911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.876927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.876942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.876957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.876971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.876987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.877980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.877994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878806] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xea22f0 was disconnected and freed. reset controller. 
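The block above shows every outstanding WRITE/READ on qid:1 completing with the generic status "ABORTED - SQ DELETION" (sct 00, sc 08) once the submission queue is torn down, after which bdev_nvme_disconnected_qpair_cb frees the disconnected qpair (0xea22f0 here) and triggers a controller reset. As a rough illustration of how such completions look to a consumer of the SPDK NVMe driver, here is a minimal, hypothetical completion-callback sketch; the callback and counter names are invented for this example, while the spdk_nvme_cpl fields and the SPDK_NVME_SC_ABORTED_SQ_DELETION status code are the ones defined in the public SPDK headers. This is not part of the test being run, only an aid for reading the log.

    /* Hypothetical helper: classify completions like the ones logged above.
     * Status codes come from spdk/nvme_spec.h (pulled in by spdk/nvme.h);
     * nothing here is part of the nvmf test itself. */
    #include <stdbool.h>
    #include "spdk/nvme.h"

    static bool
    cpl_is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
    {
            /* "ABORTED - SQ DELETION (00/08)" == generic status code type, sc 0x08 */
            return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                   cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }

    /* Example I/O completion callback (name and counter are illustrative). */
    static void
    io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            unsigned int *aborted = cb_arg;

            if (spdk_nvme_cpl_is_error(cpl) && cpl_is_sq_deletion_abort(cpl)) {
                    /* The I/O was aborted because its submission queue was
                     * deleted, e.g. while the controller is being reset; it
                     * can be retried once the qpair is reconnected. */
                    (*aborted)++;
            }
    }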
00:19:30.367 [2024-07-15 10:36:18.878868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.878983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.878998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.879012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.879027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.879041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.879057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.879071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.879086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.879110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.879125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.879139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.367 [2024-07-15 10:36:18.879155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.367 [2024-07-15 10:36:18.879175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 
10:36:18.879190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879488] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.879978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.879994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.880850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.880937] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xea3780 was disconnected and freed. reset controller. 00:19:30.368 [2024-07-15 10:36:18.880998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.881037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.881068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.881106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.881135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.881164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881177] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.881193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.881221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.881250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.881284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.881313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.881348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.881377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.881407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.368 [2024-07-15 10:36:18.881436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.368 [2024-07-15 10:36:18.881450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.881970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.881986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.882986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.882999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.883080] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd4eb50 was disconnected and freed. reset controller. 
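The same abort/free/reset cycle repeats for the other qpairs in this run (0xea3780 and 0xd4eb50 above): outstanding I/O is completed as ABORTED - SQ DELETION, the qpair is freed, and bdev_nvme schedules a controller reset. The reset path itself is internal to the bdev_nvme module, so as a hedged sketch only: an application driving the NVMe library directly could recover with the public API calls below, under the assumption that it keeps its qpair object and reconnects it after the reset. Function names are from the SPDK public headers; error handling is reduced to a bare minimum.

    /* Hypothetical recovery sketch (not the bdev_nvme internals): reset the
     * controller and reconnect the I/O qpair before resubmitting the aborted
     * commands. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static int
    recover_after_sq_deletion(struct spdk_nvme_ctrlr *ctrlr,
                              struct spdk_nvme_qpair *qpair)
    {
            int rc;

            rc = spdk_nvme_ctrlr_reset(ctrlr);
            if (rc != 0) {
                    fprintf(stderr, "controller reset failed: %d\n", rc);
                    return rc;
            }

            rc = spdk_nvme_ctrlr_reconnect_io_qpair(qpair);
            if (rc != 0) {
                    fprintf(stderr, "qpair reconnect failed: %d\n", rc);
            }
            return rc;
    }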
00:19:30.369 [2024-07-15 10:36:18.883622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.883645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.883666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.883682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.883697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.883712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.883728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.883743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.883758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.883772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.883788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.883813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.883832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.883847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.883867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.883882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.883898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.883912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.883928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.883942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 
10:36:18.883958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.883972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.883987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.369 [2024-07-15 10:36:18.884001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.369 [2024-07-15 10:36:18.884017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884256] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.884977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.884995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.885435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.885449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.891756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.891809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.891830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.891844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.891861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.891875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.891891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.891905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.892749] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfb1fc0 was disconnected and freed. reset controller. 00:19:30.370 [2024-07-15 10:36:18.892898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe17990 (9): Bad file descriptor 00:19:30.370 [2024-07-15 10:36:18.892933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1d350 (9): Bad file descriptor 00:19:30.370 [2024-07-15 10:36:18.892957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdef600 (9): Bad file descriptor 00:19:30.370 [2024-07-15 10:36:18.892981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76280 (9): Bad file descriptor 00:19:30.370 [2024-07-15 10:36:18.893011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x855610 (9): Bad file descriptor 00:19:30.370 [2024-07-15 10:36:18.893036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75c60 (9): Bad file descriptor 00:19:30.370 [2024-07-15 10:36:18.893059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd53830 (9): Bad file descriptor 00:19:30.370 [2024-07-15 10:36:18.893088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1f240 (9): Bad file descriptor 00:19:30.370 [2024-07-15 10:36:18.893113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7f450 (9): Bad file descriptor 00:19:30.370 [2024-07-15 10:36:18.893137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe17bb0 (9): Bad file descriptor 00:19:30.370 [2024-07-15 10:36:18.893237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.893265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.893290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.893306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.893322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.893336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.893352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.893367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.893383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.893398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.893415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.893429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.893445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.893459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.893475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.893489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.893504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.893518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.893534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.893549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.893564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.893578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.893594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.893609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.893625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.370 [2024-07-15 10:36:18.893639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.370 [2024-07-15 10:36:18.893659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.893674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.893690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.893705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.893720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.893734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.893750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.893764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.893780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.893795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.893824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.893839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.893855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.893870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.893885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.893900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.893916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.893930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.893947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.893962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.893978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.893992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.894984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.894998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.895014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.895028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.895043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.895057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.895072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.895086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.895102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.895115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.895131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.895145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.895161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.895175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.895194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.371 [2024-07-15 10:36:18.895209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.371 [2024-07-15 10:36:18.895303] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xde7d70 was disconnected and freed. reset controller. 00:19:30.371 [2024-07-15 10:36:18.901912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:30.371 [2024-07-15 10:36:18.901990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:30.371 [2024-07-15 10:36:18.902010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:30.371 [2024-07-15 10:36:18.902971] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:30.371 [2024-07-15 10:36:18.903059] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:30.371 [2024-07-15 10:36:18.903133] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:30.371 [2024-07-15 10:36:18.903208] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:30.371 [2024-07-15 10:36:18.903281] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:30.371 [2024-07-15 10:36:18.903605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:30.371 [2024-07-15 10:36:18.903637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:30.371 [2024-07-15 10:36:18.903812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.371 [2024-07-15 10:36:18.903846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1f240 with addr=10.0.0.2, port=4420 00:19:30.371 [2024-07-15 10:36:18.903866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1f240 is same with the state(5) to be set 00:19:30.371 [2024-07-15 10:36:18.903977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.371 [2024-07-15 10:36:18.904006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd7f450 with addr=10.0.0.2, port=4420 00:19:30.371 [2024-07-15 10:36:18.904022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7f450 is same with the state(5) to be set 00:19:30.371 [2024-07-15 10:36:18.904097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.371 [2024-07-15 10:36:18.904126] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd75c60 with addr=10.0.0.2, port=4420 00:19:30.371 [2024-07-15 10:36:18.904148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75c60 is same with the state(5) to be set 00:19:30.371 [2024-07-15 10:36:18.904671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.371 [2024-07-15 10:36:18.904700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd53830 with addr=10.0.0.2, port=4420 00:19:30.371 [2024-07-15 10:36:18.904717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd53830 is same with the state(5) to be set 00:19:30.371 [2024-07-15 10:36:18.904812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.371 [2024-07-15 10:36:18.904844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdef600 with addr=10.0.0.2, port=4420 00:19:30.372 [2024-07-15 10:36:18.904871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdef600 is same with the state(5) to be set 00:19:30.372 [2024-07-15 10:36:18.904904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1f240 (9): Bad file descriptor 00:19:30.372 [2024-07-15 10:36:18.904927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7f450 (9): Bad file descriptor 00:19:30.372 [2024-07-15 10:36:18.904955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75c60 (9): Bad file descriptor 00:19:30.372 [2024-07-15 10:36:18.905098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.372 [2024-07-15 10:36:18.905124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.372 [2024-07-15 10:36:18.905153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.372 [2024-07-15 10:36:18.905168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.372 [2024-07-15 10:36:18.905184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.372 [2024-07-15 10:36:18.905198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.372 [2024-07-15 10:36:18.905214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.372 [2024-07-15 10:36:18.905229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.372 [2024-07-15 10:36:18.905245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.372 [2024-07-15 10:36:18.905259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.372 [2024-07-15 10:36:18.905275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.638 [2024-07-15 10:36:18.905289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.638 [2024-07-15 10:36:18.905305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.638 [2024-07-15 10:36:18.905319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.638 [2024-07-15 10:36:18.905334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.638 [2024-07-15 10:36:18.905348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.638 [2024-07-15 10:36:18.905363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.638 [2024-07-15 10:36:18.905377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.638 [2024-07-15 10:36:18.905393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.638 [2024-07-15 10:36:18.905407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.638 [2024-07-15 10:36:18.905423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.638 [2024-07-15 10:36:18.905437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.638 [2024-07-15 10:36:18.905453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.638 [2024-07-15 10:36:18.905467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.638 [2024-07-15 10:36:18.905483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.638 [2024-07-15 10:36:18.905503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.638 [2024-07-15 10:36:18.905519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.638 [2024-07-15 10:36:18.905533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.638 [2024-07-15 10:36:18.905548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.905563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.905578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.905592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.905608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.905622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.905638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.905652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.905668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.905681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.905697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.905711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.905729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.905755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.905784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.905814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.905841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.905863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.905888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.905914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.905944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.905966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.905995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:30.639 [2024-07-15 10:36:18.906482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.906920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 
10:36:18.906962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.906986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.907013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.907043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.907064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.907081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.907096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.907111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.907125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.907147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.907162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.907178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.907192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.907207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.907222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.907237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.907251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.907266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.907281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.639 [2024-07-15 10:36:18.907296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.639 [2024-07-15 10:36:18.907310] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.907325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.907339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.907354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.907368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.907383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.907398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.907414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.907427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.907442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.907456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.907471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.907485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.907501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.907518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.907534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.907548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.907562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfab9f0 is same with the state(5) to be set 00:19:30.640 [2024-07-15 10:36:18.908850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.908874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.908895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.908911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.908927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.908941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.908956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.908970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.908985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.908999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909198] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.640 [2024-07-15 10:36:18.909858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.640 [2024-07-15 10:36:18.909873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.909889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.909902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.909918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.909936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.909953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.909967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.909982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.909996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:30.641 [2024-07-15 10:36:18.910414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 
10:36:18.910713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.910773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.910787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfacec0 is same with the state(5) to be set 00:19:30.641 [2024-07-15 10:36:18.912021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.912043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.912064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.912078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.912095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.912109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.912124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.912138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.912153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.912167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.912183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.912196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.912211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.912225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.912241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.912254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.912270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.912283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.912304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.912318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.912333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.912347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.641 [2024-07-15 10:36:18.912362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.641 [2024-07-15 10:36:18.912376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.912975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.912989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.642 [2024-07-15 10:36:18.913491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.642 [2024-07-15 10:36:18.913506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.913971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.913985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae390 is same with the state(5) to be set 00:19:30.643 [2024-07-15 10:36:18.915227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915619] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.915973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.915987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.643 [2024-07-15 10:36:18.916002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.643 [2024-07-15 10:36:18.916016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:30.644 [2024-07-15 10:36:18.916841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.916978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.916993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.917007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.917023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.917037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.917053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.917067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.917083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.917098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.917113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.917127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 
10:36:18.917142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.917156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.917170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaf860 is same with the state(5) to be set 00:19:30.644 [2024-07-15 10:36:18.918399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.918421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.918442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.918457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.918473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.918487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.644 [2024-07-15 10:36:18.918502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.644 [2024-07-15 10:36:18.918516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.918980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.918996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919275] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.645 [2024-07-15 10:36:18.919794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.645 [2024-07-15 10:36:18.919815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.919832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.919846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.919862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.919875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.919891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.919905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.919920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.919934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.919950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.919963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.919978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.919992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.920007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.920021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.920037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.920050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.920066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.920083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.920099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.920113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.920129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.920142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.920158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.920172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.920188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.920202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.920218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.920232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.920248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.920261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.920277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.920290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.920305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.646 [2024-07-15 10:36:18.920319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.646 [2024-07-15 10:36:18.920333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0b10 is same with the state(5) to be set 00:19:30.646 [2024-07-15 10:36:18.921909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:30.646 [2024-07-15 10:36:18.921947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:30.646 [2024-07-15 10:36:18.921965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:19:30.646 [2024-07-15 10:36:18.921984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:19:30.646 [2024-07-15 10:36:18.922053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd53830 (9): Bad file descriptor 00:19:30.646 [2024-07-15 10:36:18.922078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdef600 (9): Bad file descriptor 00:19:30.646 [2024-07-15 10:36:18.922095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:30.646 [2024-07-15 10:36:18.922108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:19:30.646 [2024-07-15 10:36:18.922130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:30.646 [2024-07-15 10:36:18.922153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:30.646 [2024-07-15 10:36:18.922167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:30.646 [2024-07-15 10:36:18.922180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:30.646 [2024-07-15 10:36:18.922199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:19:30.646 [2024-07-15 10:36:18.922212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:19:30.646 [2024-07-15 10:36:18.922225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:19:30.646 [2024-07-15 10:36:18.922268] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:30.646 [2024-07-15 10:36:18.922294] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:30.646 [2024-07-15 10:36:18.922317] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:30.646 [2024-07-15 10:36:18.922338] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:30.646 [2024-07-15 10:36:18.922360] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:30.646 [2024-07-15 10:36:18.922378] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
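
The repeated "ABORTED - SQ DELETION (00/08)" completions above are the READ I/Os that were still outstanding when the TCP qpairs were torn down: the two hex digits are the NVMe status code type and status code (generic status type 0x00, status code 0x08, "Command Aborted due to SQ Deletion"). A minimal sketch of how such a completion can be classified with SPDK's public completion definitions follows; the callback name and the printed messages are illustrative only and are not part of the test code.

/* Hypothetical completion callback (sketch only): shows how the "(00/08)"
 * pair printed above maps onto the sct/sc fields of an NVMe completion,
 * using SPDK's public definitions pulled in via spdk/nvme.h. */
#include "spdk/nvme.h"
#include <stdio.h>

static void
read_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;

	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The "ABORTED - SQ DELETION (00/08)" case: the I/O was still
		 * queued on a submission queue that was deleted while the qpair
		 * was being torn down for a reset/failover. */
		printf("I/O aborted by SQ deletion (sct=%u, sc=0x%02x)\n",
		       cpl->status.sct, cpl->status.sc);
		return;
	}

	if (cpl->status.sct != SPDK_NVME_SCT_GENERIC ||
	    cpl->status.sc != SPDK_NVME_SC_SUCCESS) {
		printf("I/O failed with sct=%u, sc=0x%02x\n",
		       cpl->status.sct, cpl->status.sc);
	}
}
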
00:19:30.646 task offset: 25216 on job bdev=Nvme2n1 fails
00:19:30.646 
00:19:30.646 Latency(us)
00:19:30.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:30.646 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:30.646 Job: Nvme1n1 ended in about 0.91 seconds with error
00:19:30.646 Verification LBA range: start 0x0 length 0x400
00:19:30.646 Nvme1n1 : 0.91 211.32 13.21 70.44 0.00 224597.71 16990.81 243891.01
00:19:30.646 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:30.646 Job: Nvme2n1 ended in about 0.90 seconds with error
00:19:30.646 Verification LBA range: start 0x0 length 0x400
00:19:30.646 Nvme2n1 : 0.90 212.48 13.28 70.83 0.00 218698.90 20486.07 256318.58
00:19:30.646 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:30.646 Job: Nvme3n1 ended in about 0.90 seconds with error
00:19:30.646 Verification LBA range: start 0x0 length 0x400
00:19:30.646 Nvme3n1 : 0.90 212.21 13.26 70.74 0.00 214404.74 27767.85 250104.79
00:19:30.646 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:30.646 Job: Nvme4n1 ended in about 0.91 seconds with error
00:19:30.646 Verification LBA range: start 0x0 length 0x400
00:19:30.646 Nvme4n1 : 0.91 211.95 13.25 70.65 0.00 210112.66 19223.89 273406.48
00:19:30.646 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:30.646 Job: Nvme5n1 ended in about 0.92 seconds with error
00:19:30.646 Verification LBA range: start 0x0 length 0x400
00:19:30.646 Nvme5n1 : 0.92 139.75 8.73 69.88 0.00 277600.08 20194.80 237677.23
00:19:30.646 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:30.646 Job: Nvme6n1 ended in about 0.92 seconds with error
00:19:30.646 Verification LBA range: start 0x0 length 0x400
00:19:30.646 Nvme6n1 : 0.92 139.27 8.70 69.63 0.00 272856.81 20194.80 264085.81
00:19:30.646 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:30.646 Job: Nvme7n1 ended in about 0.92 seconds with error
00:19:30.646 Verification LBA range: start 0x0 length 0x400
00:19:30.646 Nvme7n1 : 0.92 138.79 8.67 69.39 0.00 267989.40 24272.59 285834.05
00:19:30.646 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:30.646 Job: Nvme8n1 ended in about 0.93 seconds with error
00:19:30.646 Verification LBA range: start 0x0 length 0x400
00:19:30.646 Nvme8n1 : 0.93 138.31 8.64 69.16 0.00 263148.03 18252.99 264085.81
00:19:30.646 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:30.646 Job: Nvme9n1 ended in about 0.93 seconds with error
00:19:30.646 Verification LBA range: start 0x0 length 0x400
00:19:30.646 Nvme9n1 : 0.93 137.84 8.62 68.92 0.00 258102.11 20874.43 260978.92
00:19:30.646 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:30.646 Job: Nvme10n1 ended in about 0.91 seconds with error
00:19:30.646 Verification LBA range: start 0x0 length 0x400
00:19:30.646 Nvme10n1 : 0.91 141.06 8.82 70.53 0.00 245297.81 24855.13 284280.60
00:19:30.646 ===================================================================================================================
00:19:30.646 Total : 1682.99 105.19 700.17 0.00 241948.20 16990.81 285834.05
00:19:30.646 [2024-07-15 10:36:18.947971] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:30.646 [2024-07-15 10:36:18.948054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting 
controller 00:19:30.646 [2024-07-15 10:36:18.948092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.646 [2024-07-15 10:36:18.948110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.646 [2024-07-15 10:36:18.948122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.646 [2024-07-15 10:36:18.948340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.646 [2024-07-15 10:36:18.948373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76280 with addr=10.0.0.2, port=4420 00:19:30.647 [2024-07-15 10:36:18.948393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76280 is same with the state(5) to be set 00:19:30.647 [2024-07-15 10:36:18.948496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.647 [2024-07-15 10:36:18.948523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x855610 with addr=10.0.0.2, port=4420 00:19:30.647 [2024-07-15 10:36:18.948539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855610 is same with the state(5) to be set 00:19:30.647 [2024-07-15 10:36:18.948662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.647 [2024-07-15 10:36:18.948687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe17bb0 with addr=10.0.0.2, port=4420 00:19:30.647 [2024-07-15 10:36:18.948703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe17bb0 is same with the state(5) to be set 00:19:30.647 [2024-07-15 10:36:18.948776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.647 [2024-07-15 10:36:18.948808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe17990 with addr=10.0.0.2, port=4420 00:19:30.647 [2024-07-15 10:36:18.948826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe17990 is same with the state(5) to be set 00:19:30.647 [2024-07-15 10:36:18.948842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:30.647 [2024-07-15 10:36:18.948855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:30.647 [2024-07-15 10:36:18.948872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:30.647 [2024-07-15 10:36:18.948899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:30.647 [2024-07-15 10:36:18.948913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:30.647 [2024-07-15 10:36:18.948927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:19:30.647 [2024-07-15 10:36:18.950336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.647 [2024-07-15 10:36:18.950359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
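
As a quick consistency check on the Latency(us) summary above (editorial note, not part of the log): each job uses 65536-byte I/Os, so MiB/s should be roughly IOPS / 16, and for Nvme1n1 that gives 211.32 / 16 ≈ 13.21 MiB/s, matching the table. The Total row is the column sum: 211.32 + 212.48 + 212.21 + 211.95 + 139.75 + 139.27 + 138.79 + 138.31 + 137.84 + 141.06 ≈ 1682.99 IOPS, and the Fail/s column sums to the reported 700.17, which presumably reflects the aborted reads shown earlier being counted as failed I/Os.
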
00:19:30.647 [2024-07-15 10:36:18.950475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.647 [2024-07-15 10:36:18.950503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1d350 with addr=10.0.0.2, port=4420 00:19:30.647 [2024-07-15 10:36:18.950519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1d350 is same with the state(5) to be set 00:19:30.647 [2024-07-15 10:36:18.950544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76280 (9): Bad file descriptor 00:19:30.647 [2024-07-15 10:36:18.950567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x855610 (9): Bad file descriptor 00:19:30.647 [2024-07-15 10:36:18.950584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe17bb0 (9): Bad file descriptor 00:19:30.647 [2024-07-15 10:36:18.950602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe17990 (9): Bad file descriptor 00:19:30.647 [2024-07-15 10:36:18.950682] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:30.647 [2024-07-15 10:36:18.950706] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:30.647 [2024-07-15 10:36:18.950724] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:30.647 [2024-07-15 10:36:18.950741] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:30.647 [2024-07-15 10:36:18.951140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1d350 (9): Bad file descriptor 00:19:30.647 [2024-07-15 10:36:18.951169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:19:30.647 [2024-07-15 10:36:18.951183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:19:30.647 [2024-07-15 10:36:18.951196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:19:30.647 [2024-07-15 10:36:18.951213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:19:30.647 [2024-07-15 10:36:18.951226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:19:30.647 [2024-07-15 10:36:18.951239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:19:30.647 [2024-07-15 10:36:18.951255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:19:30.647 [2024-07-15 10:36:18.951268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:19:30.647 [2024-07-15 10:36:18.951281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:19:30.647 [2024-07-15 10:36:18.951296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:19:30.647 [2024-07-15 10:36:18.951309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:19:30.647 [2024-07-15 10:36:18.951321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:19:30.647 [2024-07-15 10:36:18.951408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:30.647 [2024-07-15 10:36:18.951432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:30.647 [2024-07-15 10:36:18.951448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:30.647 [2024-07-15 10:36:18.951463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:30.647 [2024-07-15 10:36:18.951483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:30.647 [2024-07-15 10:36:18.951500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.647 [2024-07-15 10:36:18.951512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.647 [2024-07-15 10:36:18.951556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:19:30.647 [2024-07-15 10:36:18.951571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:19:30.647 [2024-07-15 10:36:18.951584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:30.647 [2024-07-15 10:36:18.951612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.647 [2024-07-15 10:36:18.951626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.647 [2024-07-15 10:36:18.951647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:30.647 [2024-07-15 10:36:18.951740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.647 [2024-07-15 10:36:18.951766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd75c60 with addr=10.0.0.2, port=4420 00:19:30.647 [2024-07-15 10:36:18.951781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75c60 is same with the state(5) to be set 00:19:30.647 [2024-07-15 10:36:18.951873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.647 [2024-07-15 10:36:18.951898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd7f450 with addr=10.0.0.2, port=4420 00:19:30.647 [2024-07-15 10:36:18.951913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7f450 is same with the state(5) to be set 00:19:30.647 [2024-07-15 10:36:18.952007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.647 [2024-07-15 10:36:18.952033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1f240 with addr=10.0.0.2, port=4420 00:19:30.647 [2024-07-15 10:36:18.952048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1f240 is same with the state(5) to be set 00:19:30.647 [2024-07-15 10:36:18.952136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.647 [2024-07-15 10:36:18.952160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdef600 with addr=10.0.0.2, port=4420 00:19:30.647 [2024-07-15 10:36:18.952175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdef600 is same with the state(5) to be set 00:19:30.647 [2024-07-15 10:36:18.952247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.647 [2024-07-15 10:36:18.952271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd53830 with addr=10.0.0.2, port=4420 00:19:30.647 [2024-07-15 10:36:18.952286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd53830 is same with the state(5) to be set 00:19:30.647 [2024-07-15 10:36:18.952328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75c60 (9): Bad file descriptor 00:19:30.647 [2024-07-15 10:36:18.952353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7f450 (9): Bad file descriptor 00:19:30.647 [2024-07-15 10:36:18.952371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1f240 (9): Bad file descriptor 00:19:30.647 [2024-07-15 10:36:18.952388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdef600 (9): Bad file descriptor 00:19:30.647 [2024-07-15 10:36:18.952405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd53830 (9): Bad file descriptor 00:19:30.647 [2024-07-15 10:36:18.952446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:19:30.647 [2024-07-15 10:36:18.952469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:19:30.647 [2024-07-15 10:36:18.952483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:19:30.647 [2024-07-15 10:36:18.952500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:30.647 [2024-07-15 10:36:18.952514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:30.647 [2024-07-15 10:36:18.952527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:30.647 [2024-07-15 10:36:18.952542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:30.647 [2024-07-15 10:36:18.952555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:19:30.647 [2024-07-15 10:36:18.952567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:30.647 [2024-07-15 10:36:18.952581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:30.647 [2024-07-15 10:36:18.952594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:30.648 [2024-07-15 10:36:18.952606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:19:30.648 [2024-07-15 10:36:18.952621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:30.648 [2024-07-15 10:36:18.952633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:30.648 [2024-07-15 10:36:18.952645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:30.648 [2024-07-15 10:36:18.952681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.648 [2024-07-15 10:36:18.952698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.648 [2024-07-15 10:36:18.952709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.648 [2024-07-15 10:36:18.952720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.648 [2024-07-15 10:36:18.952731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
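Note: the connect() failures above all report errno = 111, i.e. ECONNREFUSED. In this shutdown test the target side is being torn down while bdev_nvme keeps retrying its controllers, so the refused connections and the repeated "Resetting controller failed." messages are the expected outcome here; the test still finishes cleanly below (END TEST nvmf_shutdown_tc3). A quick way to confirm the errno mapping on the build host (illustrative one-liner, not part of the captured trace):
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused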
00:19:30.907 10:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:30.907 10:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1244721 00:19:32.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1244721) - No such process 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:32.286 rmmod nvme_tcp 00:19:32.286 rmmod nvme_fabrics 00:19:32.286 rmmod nvme_keyring 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.286 10:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.241 10:36:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:34.241 00:19:34.241 real 0m7.651s 00:19:34.241 user 0m19.180s 00:19:34.241 sys 0m1.402s 00:19:34.241 
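Note: the nvmftestfini teardown traced above amounts to unloading the NVMe/TCP host modules and flushing the test interfaces. A rough manual equivalent, using the same interface and namespace names as this job (sketch; _remove_spdk_ns is assumed here to delete the cvl_0_0_ns_spdk namespace):
sync
modprobe -v -r nvme-tcp                        # rmmod's nvme_tcp/nvme_fabrics/nvme_keyring as seen above
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1                       # drop the initiator-side test address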
10:36:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:34.241 10:36:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:34.241 ************************************ 00:19:34.241 END TEST nvmf_shutdown_tc3 00:19:34.241 ************************************ 00:19:34.241 10:36:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:34.241 10:36:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:19:34.241 00:19:34.241 real 0m27.494s 00:19:34.241 user 1m16.643s 00:19:34.241 sys 0m6.307s 00:19:34.241 10:36:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:34.241 10:36:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:34.241 ************************************ 00:19:34.241 END TEST nvmf_shutdown 00:19:34.241 ************************************ 00:19:34.241 10:36:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:34.241 10:36:22 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:34.241 10:36:22 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:34.241 10:36:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:34.241 10:36:22 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:34.241 10:36:22 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:34.241 10:36:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:34.241 10:36:22 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:34.241 10:36:22 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:34.241 10:36:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:34.241 10:36:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.241 10:36:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:34.241 ************************************ 00:19:34.241 START TEST nvmf_multicontroller 00:19:34.241 ************************************ 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:34.241 * Looking for test storage... 
00:19:34.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:34.241 10:36:22 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.241 10:36:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.242 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:34.242 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:34.242 10:36:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:19:34.242 10:36:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.772 10:36:24 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:36.772 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:36.772 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:36.772 Found net devices under 0000:09:00.0: cvl_0_0 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:36.772 Found net devices under 0000:09:00.1: cvl_0_1 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.772 10:36:24 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:36.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:36.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:19:36.772 00:19:36.772 --- 10.0.0.2 ping statistics --- 00:19:36.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.772 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:36.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:19:36.772 00:19:36.772 --- 10.0.0.1 ping statistics --- 00:19:36.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.772 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:36.772 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:36.773 10:36:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:36.773 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:36.773 10:36:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:36.773 10:36:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:36.773 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1247198 00:19:36.773 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:36.773 10:36:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1247198 00:19:36.773 10:36:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1247198 ']' 00:19:36.773 10:36:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.773 10:36:24 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.773 10:36:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.773 10:36:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.773 10:36:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:36.773 [2024-07-15 10:36:24.951856] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:36.773 [2024-07-15 10:36:24.951944] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.773 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.773 [2024-07-15 10:36:25.021366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:36.773 [2024-07-15 10:36:25.128715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.773 [2024-07-15 10:36:25.128766] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.773 [2024-07-15 10:36:25.128793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.773 [2024-07-15 10:36:25.128811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.773 [2024-07-15 10:36:25.128821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
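Note: condensed, the nvmftestinit bring-up traced above boils down to moving one of the two detected ports (cvl_0_0) into a network namespace for the target, addressing both sides, opening TCP/4420, and verifying reachability before the target app is started inside that namespace. A sketch with the same names, addresses and core mask as this run:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# the harness then waits for the RPC socket /var/tmp/spdk.sock (pid 1247198 in this run) before issuing RPCs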
00:19:36.773 [2024-07-15 10:36:25.128946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.773 [2024-07-15 10:36:25.129010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:36.773 [2024-07-15 10:36:25.129013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:36.773 [2024-07-15 10:36:25.264907] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:36.773 Malloc0 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.773 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:36.773 [2024-07-15 10:36:25.319884] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.031 
10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.031 [2024-07-15 10:36:25.327726] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.031 Malloc1 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1247349 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1247349 /var/tmp/bdevperf.sock 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1247349 ']' 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.031 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.289 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.289 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:19:37.289 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:37.289 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.289 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.546 NVMe0n1 00:19:37.546 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.547 1 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.547 request: 00:19:37.547 { 00:19:37.547 "name": "NVMe0", 00:19:37.547 "trtype": "tcp", 00:19:37.547 "traddr": "10.0.0.2", 00:19:37.547 "adrfam": "ipv4", 00:19:37.547 "trsvcid": "4420", 00:19:37.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.547 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:37.547 "hostaddr": "10.0.0.2", 00:19:37.547 "hostsvcid": "60000", 00:19:37.547 "prchk_reftag": false, 00:19:37.547 "prchk_guard": false, 00:19:37.547 "hdgst": false, 00:19:37.547 "ddgst": false, 00:19:37.547 "method": "bdev_nvme_attach_controller", 00:19:37.547 "req_id": 1 00:19:37.547 } 00:19:37.547 Got JSON-RPC error response 00:19:37.547 response: 00:19:37.547 { 00:19:37.547 "code": -114, 00:19:37.547 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:37.547 } 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.547 request: 00:19:37.547 { 00:19:37.547 "name": "NVMe0", 00:19:37.547 "trtype": "tcp", 00:19:37.547 "traddr": "10.0.0.2", 00:19:37.547 "adrfam": "ipv4", 00:19:37.547 "trsvcid": "4420", 00:19:37.547 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:37.547 "hostaddr": "10.0.0.2", 00:19:37.547 "hostsvcid": "60000", 00:19:37.547 "prchk_reftag": false, 00:19:37.547 "prchk_guard": false, 00:19:37.547 
"hdgst": false, 00:19:37.547 "ddgst": false, 00:19:37.547 "method": "bdev_nvme_attach_controller", 00:19:37.547 "req_id": 1 00:19:37.547 } 00:19:37.547 Got JSON-RPC error response 00:19:37.547 response: 00:19:37.547 { 00:19:37.547 "code": -114, 00:19:37.547 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:37.547 } 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.547 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.547 request: 00:19:37.547 { 00:19:37.547 "name": "NVMe0", 00:19:37.547 "trtype": "tcp", 00:19:37.547 "traddr": "10.0.0.2", 00:19:37.548 "adrfam": "ipv4", 00:19:37.548 "trsvcid": "4420", 00:19:37.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.548 "hostaddr": "10.0.0.2", 00:19:37.548 "hostsvcid": "60000", 00:19:37.548 "prchk_reftag": false, 00:19:37.548 "prchk_guard": false, 00:19:37.548 "hdgst": false, 00:19:37.548 "ddgst": false, 00:19:37.548 "multipath": "disable", 00:19:37.548 "method": "bdev_nvme_attach_controller", 00:19:37.548 "req_id": 1 00:19:37.548 } 00:19:37.548 Got JSON-RPC error response 00:19:37.548 response: 00:19:37.548 { 00:19:37.548 "code": -114, 00:19:37.548 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:19:37.548 } 00:19:37.548 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:37.548 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:37.548 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:37.548 10:36:25 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:37.548 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:37.548 10:36:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:37.548 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:37.548 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:37.548 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:37.548 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.548 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:37.548 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.548 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:37.548 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.548 10:36:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.548 request: 00:19:37.548 { 00:19:37.548 "name": "NVMe0", 00:19:37.548 "trtype": "tcp", 00:19:37.548 "traddr": "10.0.0.2", 00:19:37.548 "adrfam": "ipv4", 00:19:37.548 "trsvcid": "4420", 00:19:37.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.548 "hostaddr": "10.0.0.2", 00:19:37.548 "hostsvcid": "60000", 00:19:37.548 "prchk_reftag": false, 00:19:37.548 "prchk_guard": false, 00:19:37.548 "hdgst": false, 00:19:37.548 "ddgst": false, 00:19:37.548 "multipath": "failover", 00:19:37.548 "method": "bdev_nvme_attach_controller", 00:19:37.548 "req_id": 1 00:19:37.548 } 00:19:37.548 Got JSON-RPC error response 00:19:37.548 response: 00:19:37.548 { 00:19:37.548 "code": -114, 00:19:37.548 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:37.548 } 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.548 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.548 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.805 00:19:37.805 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.805 10:36:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:37.805 10:36:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:37.805 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.805 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:37.805 10:36:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.805 10:36:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:37.805 10:36:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:39.176 0 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1247349 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1247349 ']' 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1247349 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1247349 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1247349' 00:19:39.176 killing process with pid 1247349 00:19:39.176 10:36:27 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1247349 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1247349 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:19:39.176 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:39.176 [2024-07-15 10:36:25.424609] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:39.176 [2024-07-15 10:36:25.424698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247349 ] 00:19:39.176 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.176 [2024-07-15 10:36:25.485380] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.176 [2024-07-15 10:36:25.595097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.176 [2024-07-15 10:36:26.186630] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 4fc104ed-6462-4430-9ac8-3640f7bbaab2 already exists 00:19:39.176 [2024-07-15 10:36:26.186667] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:4fc104ed-6462-4430-9ac8-3640f7bbaab2 alias for bdev NVMe1n1 00:19:39.176 [2024-07-15 10:36:26.186696] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:39.176 Running I/O for 1 seconds... 
00:19:39.176 00:19:39.176 Latency(us) 00:19:39.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.176 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:39.176 NVMe0n1 : 1.01 18919.64 73.90 0.00 0.00 6753.97 2051.03 12039.21 00:19:39.176 =================================================================================================================== 00:19:39.176 Total : 18919.64 73.90 0.00 0.00 6753.97 2051.03 12039.21 00:19:39.176 Received shutdown signal, test time was about 1.000000 seconds 00:19:39.176 00:19:39.176 Latency(us) 00:19:39.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.176 =================================================================================================================== 00:19:39.176 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:39.176 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:39.176 10:36:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:39.176 rmmod nvme_tcp 00:19:39.176 rmmod nvme_fabrics 00:19:39.176 rmmod nvme_keyring 00:19:39.177 10:36:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:39.177 10:36:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:19:39.177 10:36:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:19:39.177 10:36:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1247198 ']' 00:19:39.177 10:36:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1247198 00:19:39.177 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1247198 ']' 00:19:39.177 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1247198 00:19:39.177 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:19:39.177 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.177 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1247198 00:19:39.177 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:39.177 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:39.177 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1247198' 00:19:39.177 killing process with pid 1247198 00:19:39.177 10:36:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1247198 00:19:39.177 10:36:27 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1247198 00:19:39.743 10:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:39.743 10:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:39.743 10:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:39.743 10:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.743 10:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:39.743 10:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.743 10:36:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.743 10:36:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.645 10:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:41.645 00:19:41.645 real 0m7.472s 00:19:41.645 user 0m11.591s 00:19:41.645 sys 0m2.328s 00:19:41.645 10:36:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:41.645 10:36:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.645 ************************************ 00:19:41.645 END TEST nvmf_multicontroller 00:19:41.645 ************************************ 00:19:41.645 10:36:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:41.645 10:36:30 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:41.645 10:36:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:41.645 10:36:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.645 10:36:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:41.645 ************************************ 00:19:41.645 START TEST nvmf_aer 00:19:41.645 ************************************ 00:19:41.645 10:36:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:41.902 * Looking for test storage... 
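
For reference, the multipath checks traced in the nvmf_multicontroller run above reduce to a short sequence of bdev_nvme RPCs against the bdevperf socket. The sketch below is a hand-run approximation rather than the test script itself: it assumes an SPDK source tree with scripts/rpc.py available (the rpc_cmd helper seen in the trace drives the same JSON-RPC methods) and a bdevperf process already listening on /var/tmp/bdevperf.sock with controller NVMe0 attached to 10.0.0.2:4420, as in the log.

  rpc=./scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # Negative check: reusing the controller name with a conflicting path is
  # expected to be rejected with JSON-RPC error -114, as captured above.
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable \
       || echo "second attach rejected, as expected"
  # Positive path: attach the 4421 listener, swap it for a separately named
  # controller, confirm two controllers exist, then run the bdevperf job.
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  $rpc -s $sock bdev_nvme_get_controllers | grep -c NVMe    # expect 2
  ./examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
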
00:19:41.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.902 10:36:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.903 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:41.903 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:41.903 10:36:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:19:41.903 10:36:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:43.798 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 
0x159b)' 00:19:43.798 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:43.798 Found net devices under 0000:09:00.0: cvl_0_0 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:43.798 Found net devices under 0000:09:00.1: cvl_0_1 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.798 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:43.799 
10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:43.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:19:43.799 00:19:43.799 --- 10.0.0.2 ping statistics --- 00:19:43.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.799 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:43.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:19:43.799 00:19:43.799 --- 10.0.0.1 ping statistics --- 00:19:43.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.799 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1249554 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1249554 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1249554 ']' 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.799 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:43.799 [2024-07-15 10:36:32.345139] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:43.799 [2024-07-15 10:36:32.345227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.056 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.056 [2024-07-15 10:36:32.408635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:44.056 [2024-07-15 10:36:32.515581] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.056 [2024-07-15 10:36:32.515629] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
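
The addressing behind the ping checks above is set up by nvmf_tcp_init in test/nvmf/common.sh: the target-side port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2, while the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1. A condensed, illustrative replay of those commands follows; the interface names are specific to this host and everything must run as root.

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
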
00:19:44.056 [2024-07-15 10:36:32.515657] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.056 [2024-07-15 10:36:32.515669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.056 [2024-07-15 10:36:32.515678] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:44.056 [2024-07-15 10:36:32.515761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.056 [2024-07-15 10:36:32.515903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.056 [2024-07-15 10:36:32.515930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:44.056 [2024-07-15 10:36:32.515932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:44.314 [2024-07-15 10:36:32.677850] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:44.314 Malloc0 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:44.314 [2024-07-15 10:36:32.731175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:44.314 [ 00:19:44.314 { 00:19:44.314 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:44.314 "subtype": "Discovery", 00:19:44.314 "listen_addresses": [], 00:19:44.314 "allow_any_host": true, 00:19:44.314 "hosts": [] 00:19:44.314 }, 00:19:44.314 { 00:19:44.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.314 "subtype": "NVMe", 00:19:44.314 "listen_addresses": [ 00:19:44.314 { 00:19:44.314 "trtype": "TCP", 00:19:44.314 "adrfam": "IPv4", 00:19:44.314 "traddr": "10.0.0.2", 00:19:44.314 "trsvcid": "4420" 00:19:44.314 } 00:19:44.314 ], 00:19:44.314 "allow_any_host": true, 00:19:44.314 "hosts": [], 00:19:44.314 "serial_number": "SPDK00000000000001", 00:19:44.314 "model_number": "SPDK bdev Controller", 00:19:44.314 "max_namespaces": 2, 00:19:44.314 "min_cntlid": 1, 00:19:44.314 "max_cntlid": 65519, 00:19:44.314 "namespaces": [ 00:19:44.314 { 00:19:44.314 "nsid": 1, 00:19:44.314 "bdev_name": "Malloc0", 00:19:44.314 "name": "Malloc0", 00:19:44.314 "nguid": "61E5C9CC32DA4A89B3FDE292D37B7639", 00:19:44.314 "uuid": "61e5c9cc-32da-4a89-b3fd-e292d37b7639" 00:19:44.314 } 00:19:44.314 ] 00:19:44.314 } 00:19:44.314 ] 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1249583 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:44.314 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:19:44.314 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:44.572 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:44.572 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:19:44.572 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:19:44.572 10:36:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:44.572 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:44.572 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:44.572 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:19:44.572 10:36:33 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:44.572 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.572 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:44.572 Malloc1 00:19:44.572 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.572 10:36:33 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:44.572 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.572 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:44.572 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.572 10:36:33 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:44.572 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.572 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:44.830 [ 00:19:44.830 { 00:19:44.830 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:44.830 "subtype": "Discovery", 00:19:44.830 "listen_addresses": [], 00:19:44.830 "allow_any_host": true, 00:19:44.830 "hosts": [] 00:19:44.830 }, 00:19:44.830 { 00:19:44.830 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.830 "subtype": "NVMe", 00:19:44.830 "listen_addresses": [ 00:19:44.830 { 00:19:44.830 "trtype": "TCP", 00:19:44.830 "adrfam": "IPv4", 00:19:44.830 "traddr": "10.0.0.2", 00:19:44.830 "trsvcid": "4420" 00:19:44.830 } 00:19:44.830 ], 00:19:44.830 "allow_any_host": true, 00:19:44.830 "hosts": [], 00:19:44.830 Asynchronous Event Request test 00:19:44.830 Attaching to 10.0.0.2 00:19:44.830 Attached to 10.0.0.2 00:19:44.830 Registering asynchronous event callbacks... 00:19:44.830 Starting namespace attribute notice tests for all controllers... 00:19:44.830 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:44.830 aer_cb - Changed Namespace 00:19:44.830 Cleaning up... 
00:19:44.830 "serial_number": "SPDK00000000000001", 00:19:44.830 "model_number": "SPDK bdev Controller", 00:19:44.830 "max_namespaces": 2, 00:19:44.830 "min_cntlid": 1, 00:19:44.830 "max_cntlid": 65519, 00:19:44.830 "namespaces": [ 00:19:44.830 { 00:19:44.830 "nsid": 1, 00:19:44.830 "bdev_name": "Malloc0", 00:19:44.830 "name": "Malloc0", 00:19:44.830 "nguid": "61E5C9CC32DA4A89B3FDE292D37B7639", 00:19:44.830 "uuid": "61e5c9cc-32da-4a89-b3fd-e292d37b7639" 00:19:44.830 }, 00:19:44.830 { 00:19:44.830 "nsid": 2, 00:19:44.830 "bdev_name": "Malloc1", 00:19:44.830 "name": "Malloc1", 00:19:44.830 "nguid": "9EC32736EDAA4E0FB3916B23C15D8E8C", 00:19:44.830 "uuid": "9ec32736-edaa-4e0f-b391-6b23c15d8e8c" 00:19:44.830 } 00:19:44.830 ] 00:19:44.830 } 00:19:44.830 ] 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1249583 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:44.830 rmmod nvme_tcp 00:19:44.830 rmmod nvme_fabrics 00:19:44.830 rmmod nvme_keyring 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1249554 ']' 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1249554 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1249554 ']' 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1249554 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1249554 00:19:44.830 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:44.831 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:44.831 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1249554' 00:19:44.831 killing process with pid 1249554 00:19:44.831 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1249554 00:19:44.831 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1249554 00:19:45.090 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:45.090 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:45.090 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:45.090 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.090 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:45.090 10:36:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.090 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.090 10:36:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.622 10:36:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:47.622 00:19:47.622 real 0m5.435s 00:19:47.622 user 0m4.594s 00:19:47.622 sys 0m1.876s 00:19:47.622 10:36:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:47.622 10:36:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:47.622 ************************************ 00:19:47.622 END TEST nvmf_aer 00:19:47.622 ************************************ 00:19:47.622 10:36:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:47.622 10:36:35 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:47.622 10:36:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:47.622 10:36:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.622 10:36:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:47.622 ************************************ 00:19:47.622 START TEST nvmf_async_init 00:19:47.622 ************************************ 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:47.622 * Looking for test storage... 
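
Stripped of the xtrace plumbing, the nvmf_aer run above provisions a single TCP subsystem, arms the AER helper, and then hot-adds a second namespace so the target raises the namespace-attribute-changed event reported by aer_cb. The sketch below is a condensed replay under the same assumptions as the log: nvmf_tgt is already running inside the namespace on its default RPC socket, commands are issued from the SPDK source tree, and the simple polling loop stands in for the test's waitforfile helper.

  rpc=./scripts/rpc.py                      # talks to nvmf_tgt's default /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 --name Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Arm the asynchronous-event listener and wait until it signals readiness.
  rm -f /tmp/aer_touch_file
  ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

  # Adding namespace 2 is what triggers the "aer_cb - Changed Namespace" output.
  $rpc bdev_malloc_create 64 4096 --name Malloc1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait $aerpid

  # Cleanup mirrors the trace: drop the bdevs, then the subsystem.
  $rpc bdev_malloc_delete Malloc0
  $rpc bdev_malloc_delete Malloc1
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
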
00:19:47.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8aae8fb316704f20a64fd22bda085686 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:47.622 10:36:35 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:19:47.622 10:36:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:49.523 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:49.523 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.523 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:49.524 Found net devices under 0000:09:00.0: cvl_0_0 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:49.524 Found net devices under 0000:09:00.1: cvl_0_1 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:49.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:19:49.524 00:19:49.524 --- 10.0.0.2 ping statistics --- 00:19:49.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.524 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:49.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:19:49.524 00:19:49.524 --- 10.0.0.1 ping statistics --- 00:19:49.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.524 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1251640 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1251640 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1251640 ']' 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.524 10:36:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:49.524 [2024-07-15 10:36:37.904976] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:49.524 [2024-07-15 10:36:37.905051] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.524 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.524 [2024-07-15 10:36:37.966931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.524 [2024-07-15 10:36:38.071989] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.524 [2024-07-15 10:36:38.072057] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.524 [2024-07-15 10:36:38.072071] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.524 [2024-07-15 10:36:38.072098] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.524 [2024-07-15 10:36:38.072108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:49.524 [2024-07-15 10:36:38.072155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:49.781 [2024-07-15 10:36:38.216300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:49.781 null0 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:49.781 10:36:38 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8aae8fb316704f20a64fd22bda085686 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:49.781 [2024-07-15 10:36:38.256527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.781 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:50.037 nvme0n1 00:19:50.038 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.038 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:50.038 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.038 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:50.038 [ 00:19:50.038 { 00:19:50.038 "name": "nvme0n1", 00:19:50.038 "aliases": [ 00:19:50.038 "8aae8fb3-1670-4f20-a64f-d22bda085686" 00:19:50.038 ], 00:19:50.038 "product_name": "NVMe disk", 00:19:50.038 "block_size": 512, 00:19:50.038 "num_blocks": 2097152, 00:19:50.038 "uuid": "8aae8fb3-1670-4f20-a64f-d22bda085686", 00:19:50.038 "assigned_rate_limits": { 00:19:50.038 "rw_ios_per_sec": 0, 00:19:50.038 "rw_mbytes_per_sec": 0, 00:19:50.038 "r_mbytes_per_sec": 0, 00:19:50.038 "w_mbytes_per_sec": 0 00:19:50.038 }, 00:19:50.038 "claimed": false, 00:19:50.038 "zoned": false, 00:19:50.038 "supported_io_types": { 00:19:50.038 "read": true, 00:19:50.038 "write": true, 00:19:50.038 "unmap": false, 00:19:50.038 "flush": true, 00:19:50.038 "reset": true, 00:19:50.038 "nvme_admin": true, 00:19:50.038 "nvme_io": true, 00:19:50.038 "nvme_io_md": false, 00:19:50.038 "write_zeroes": true, 00:19:50.038 "zcopy": false, 00:19:50.038 "get_zone_info": false, 00:19:50.038 "zone_management": false, 00:19:50.038 "zone_append": false, 00:19:50.038 "compare": true, 00:19:50.038 "compare_and_write": true, 00:19:50.038 "abort": true, 00:19:50.038 "seek_hole": false, 00:19:50.038 "seek_data": false, 00:19:50.038 "copy": true, 00:19:50.038 "nvme_iov_md": false 00:19:50.038 }, 00:19:50.038 "memory_domains": [ 00:19:50.038 { 00:19:50.038 "dma_device_id": "system", 00:19:50.038 "dma_device_type": 1 00:19:50.038 } 00:19:50.038 ], 00:19:50.038 "driver_specific": { 00:19:50.038 "nvme": [ 00:19:50.038 { 00:19:50.038 "trid": { 00:19:50.038 "trtype": "TCP", 00:19:50.038 "adrfam": "IPv4", 00:19:50.038 "traddr": "10.0.0.2", 
00:19:50.038 "trsvcid": "4420", 00:19:50.038 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:50.038 }, 00:19:50.038 "ctrlr_data": { 00:19:50.038 "cntlid": 1, 00:19:50.038 "vendor_id": "0x8086", 00:19:50.038 "model_number": "SPDK bdev Controller", 00:19:50.038 "serial_number": "00000000000000000000", 00:19:50.038 "firmware_revision": "24.09", 00:19:50.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:50.038 "oacs": { 00:19:50.038 "security": 0, 00:19:50.038 "format": 0, 00:19:50.038 "firmware": 0, 00:19:50.038 "ns_manage": 0 00:19:50.038 }, 00:19:50.038 "multi_ctrlr": true, 00:19:50.038 "ana_reporting": false 00:19:50.038 }, 00:19:50.038 "vs": { 00:19:50.038 "nvme_version": "1.3" 00:19:50.038 }, 00:19:50.038 "ns_data": { 00:19:50.038 "id": 1, 00:19:50.038 "can_share": true 00:19:50.038 } 00:19:50.038 } 00:19:50.038 ], 00:19:50.038 "mp_policy": "active_passive" 00:19:50.038 } 00:19:50.038 } 00:19:50.038 ] 00:19:50.038 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.038 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:50.038 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.038 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:50.038 [2024-07-15 10:36:38.505248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:50.038 [2024-07-15 10:36:38.505335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95c090 (9): Bad file descriptor 00:19:50.294 [2024-07-15 10:36:38.637932] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:50.294 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.294 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:50.294 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.294 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:50.294 [ 00:19:50.294 { 00:19:50.294 "name": "nvme0n1", 00:19:50.294 "aliases": [ 00:19:50.294 "8aae8fb3-1670-4f20-a64f-d22bda085686" 00:19:50.294 ], 00:19:50.294 "product_name": "NVMe disk", 00:19:50.294 "block_size": 512, 00:19:50.294 "num_blocks": 2097152, 00:19:50.294 "uuid": "8aae8fb3-1670-4f20-a64f-d22bda085686", 00:19:50.294 "assigned_rate_limits": { 00:19:50.294 "rw_ios_per_sec": 0, 00:19:50.294 "rw_mbytes_per_sec": 0, 00:19:50.294 "r_mbytes_per_sec": 0, 00:19:50.294 "w_mbytes_per_sec": 0 00:19:50.295 }, 00:19:50.295 "claimed": false, 00:19:50.295 "zoned": false, 00:19:50.295 "supported_io_types": { 00:19:50.295 "read": true, 00:19:50.295 "write": true, 00:19:50.295 "unmap": false, 00:19:50.295 "flush": true, 00:19:50.295 "reset": true, 00:19:50.295 "nvme_admin": true, 00:19:50.295 "nvme_io": true, 00:19:50.295 "nvme_io_md": false, 00:19:50.295 "write_zeroes": true, 00:19:50.295 "zcopy": false, 00:19:50.295 "get_zone_info": false, 00:19:50.295 "zone_management": false, 00:19:50.295 "zone_append": false, 00:19:50.295 "compare": true, 00:19:50.295 "compare_and_write": true, 00:19:50.295 "abort": true, 00:19:50.295 "seek_hole": false, 00:19:50.295 "seek_data": false, 00:19:50.295 "copy": true, 00:19:50.295 "nvme_iov_md": false 00:19:50.295 }, 00:19:50.295 "memory_domains": [ 00:19:50.295 { 00:19:50.295 "dma_device_id": "system", 00:19:50.295 "dma_device_type": 1 
00:19:50.295 } 00:19:50.295 ], 00:19:50.295 "driver_specific": { 00:19:50.295 "nvme": [ 00:19:50.295 { 00:19:50.295 "trid": { 00:19:50.295 "trtype": "TCP", 00:19:50.295 "adrfam": "IPv4", 00:19:50.295 "traddr": "10.0.0.2", 00:19:50.295 "trsvcid": "4420", 00:19:50.295 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:50.295 }, 00:19:50.295 "ctrlr_data": { 00:19:50.295 "cntlid": 2, 00:19:50.295 "vendor_id": "0x8086", 00:19:50.295 "model_number": "SPDK bdev Controller", 00:19:50.295 "serial_number": "00000000000000000000", 00:19:50.295 "firmware_revision": "24.09", 00:19:50.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:50.295 "oacs": { 00:19:50.295 "security": 0, 00:19:50.295 "format": 0, 00:19:50.295 "firmware": 0, 00:19:50.295 "ns_manage": 0 00:19:50.295 }, 00:19:50.295 "multi_ctrlr": true, 00:19:50.295 "ana_reporting": false 00:19:50.295 }, 00:19:50.295 "vs": { 00:19:50.295 "nvme_version": "1.3" 00:19:50.295 }, 00:19:50.295 "ns_data": { 00:19:50.295 "id": 1, 00:19:50.295 "can_share": true 00:19:50.295 } 00:19:50.295 } 00:19:50.295 ], 00:19:50.295 "mp_policy": "active_passive" 00:19:50.295 } 00:19:50.295 } 00:19:50.295 ] 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.bncaptXAus 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.bncaptXAus 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:50.295 [2024-07-15 10:36:38.685878] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:50.295 [2024-07-15 10:36:38.685991] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bncaptXAus 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:50.295 [2024-07-15 10:36:38.693891] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bncaptXAus 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:50.295 [2024-07-15 10:36:38.701908] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.295 [2024-07-15 10:36:38.701973] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:50.295 nvme0n1 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:50.295 [ 00:19:50.295 { 00:19:50.295 "name": "nvme0n1", 00:19:50.295 "aliases": [ 00:19:50.295 "8aae8fb3-1670-4f20-a64f-d22bda085686" 00:19:50.295 ], 00:19:50.295 "product_name": "NVMe disk", 00:19:50.295 "block_size": 512, 00:19:50.295 "num_blocks": 2097152, 00:19:50.295 "uuid": "8aae8fb3-1670-4f20-a64f-d22bda085686", 00:19:50.295 "assigned_rate_limits": { 00:19:50.295 "rw_ios_per_sec": 0, 00:19:50.295 "rw_mbytes_per_sec": 0, 00:19:50.295 "r_mbytes_per_sec": 0, 00:19:50.295 "w_mbytes_per_sec": 0 00:19:50.295 }, 00:19:50.295 "claimed": false, 00:19:50.295 "zoned": false, 00:19:50.295 "supported_io_types": { 00:19:50.295 "read": true, 00:19:50.295 "write": true, 00:19:50.295 "unmap": false, 00:19:50.295 "flush": true, 00:19:50.295 "reset": true, 00:19:50.295 "nvme_admin": true, 00:19:50.295 "nvme_io": true, 00:19:50.295 "nvme_io_md": false, 00:19:50.295 "write_zeroes": true, 00:19:50.295 "zcopy": false, 00:19:50.295 "get_zone_info": false, 00:19:50.295 "zone_management": false, 00:19:50.295 "zone_append": false, 00:19:50.295 "compare": true, 00:19:50.295 "compare_and_write": true, 00:19:50.295 "abort": true, 00:19:50.295 "seek_hole": false, 00:19:50.295 "seek_data": false, 00:19:50.295 "copy": true, 00:19:50.295 "nvme_iov_md": false 00:19:50.295 }, 00:19:50.295 "memory_domains": [ 00:19:50.295 { 00:19:50.295 "dma_device_id": "system", 00:19:50.295 "dma_device_type": 1 00:19:50.295 } 00:19:50.295 ], 00:19:50.295 "driver_specific": { 00:19:50.295 "nvme": [ 00:19:50.295 { 00:19:50.295 "trid": { 00:19:50.295 "trtype": "TCP", 00:19:50.295 "adrfam": "IPv4", 00:19:50.295 "traddr": "10.0.0.2", 00:19:50.295 "trsvcid": "4421", 00:19:50.295 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:50.295 }, 00:19:50.295 "ctrlr_data": { 00:19:50.295 "cntlid": 3, 00:19:50.295 "vendor_id": "0x8086", 00:19:50.295 "model_number": "SPDK bdev Controller", 00:19:50.295 "serial_number": "00000000000000000000", 00:19:50.295 "firmware_revision": "24.09", 00:19:50.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:19:50.295 "oacs": { 00:19:50.295 "security": 0, 00:19:50.295 "format": 0, 00:19:50.295 "firmware": 0, 00:19:50.295 "ns_manage": 0 00:19:50.295 }, 00:19:50.295 "multi_ctrlr": true, 00:19:50.295 "ana_reporting": false 00:19:50.295 }, 00:19:50.295 "vs": { 00:19:50.295 "nvme_version": "1.3" 00:19:50.295 }, 00:19:50.295 "ns_data": { 00:19:50.295 "id": 1, 00:19:50.295 "can_share": true 00:19:50.295 } 00:19:50.295 } 00:19:50.295 ], 00:19:50.295 "mp_policy": "active_passive" 00:19:50.295 } 00:19:50.295 } 00:19:50.295 ] 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.bncaptXAus 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:50.295 rmmod nvme_tcp 00:19:50.295 rmmod nvme_fabrics 00:19:50.295 rmmod nvme_keyring 00:19:50.295 10:36:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:50.552 10:36:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:19:50.552 10:36:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:19:50.552 10:36:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1251640 ']' 00:19:50.552 10:36:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1251640 00:19:50.552 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1251640 ']' 00:19:50.552 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1251640 00:19:50.552 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:19:50.552 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.552 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1251640 00:19:50.552 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:50.552 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:50.552 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1251640' 00:19:50.552 killing process with pid 1251640 00:19:50.552 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1251640 00:19:50.552 [2024-07-15 10:36:38.873902] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:19:50.552 [2024-07-15 10:36:38.873934] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:50.552 10:36:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1251640 00:19:50.809 10:36:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:50.809 10:36:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:50.809 10:36:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:50.809 10:36:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:50.809 10:36:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:50.809 10:36:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.809 10:36:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.809 10:36:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.711 10:36:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:52.711 00:19:52.712 real 0m5.527s 00:19:52.712 user 0m2.106s 00:19:52.712 sys 0m1.804s 00:19:52.712 10:36:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:52.712 10:36:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:52.712 ************************************ 00:19:52.712 END TEST nvmf_async_init 00:19:52.712 ************************************ 00:19:52.712 10:36:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:52.712 10:36:41 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:52.712 10:36:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:52.712 10:36:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.712 10:36:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:52.712 ************************************ 00:19:52.712 START TEST dma 00:19:52.712 ************************************ 00:19:52.712 10:36:41 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:52.712 * Looking for test storage... 
00:19:52.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:52.970 10:36:41 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.970 10:36:41 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.970 10:36:41 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.970 10:36:41 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.970 10:36:41 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.970 10:36:41 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.970 10:36:41 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.970 10:36:41 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:19:52.970 10:36:41 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.970 10:36:41 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.970 10:36:41 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:52.970 10:36:41 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:19:52.970 00:19:52.970 real 0m0.070s 00:19:52.970 user 0m0.035s 00:19:52.970 sys 0m0.040s 00:19:52.970 10:36:41 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:52.970 10:36:41 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:19:52.970 ************************************ 00:19:52.970 END TEST dma 00:19:52.970 ************************************ 00:19:52.970 10:36:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:52.970 10:36:41 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:52.970 10:36:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:52.970 10:36:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.970 10:36:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:52.970 ************************************ 00:19:52.970 START TEST nvmf_identify 00:19:52.970 ************************************ 00:19:52.970 10:36:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:52.970 * Looking for test storage... 
00:19:52.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:52.970 10:36:41 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.970 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:52.970 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.970 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.970 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.970 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:19:52.971 10:36:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:55.502 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:55.502 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:55.502 Found net devices under 0000:09:00.0: cvl_0_0 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:55.502 Found net devices under 0000:09:00.1: cvl_0_1 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.502 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:55.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:19:55.503 00:19:55.503 --- 10.0.0.2 ping statistics --- 00:19:55.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.503 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:55.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:19:55.503 00:19:55.503 --- 10.0.0.1 ping statistics --- 00:19:55.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.503 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1253766 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1253766 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1253766 ']' 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.503 10:36:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:55.503 [2024-07-15 10:36:43.718910] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:55.503 [2024-07-15 10:36:43.719007] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.503 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.503 [2024-07-15 10:36:43.782512] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.503 [2024-07-15 10:36:43.884447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
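The nvmf_tcp_init trace above builds the point-to-point TCP test bed: the first ice port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; an iptables rule opens the NVMe/TCP port and a ping in each direction verifies the link before nvmf_tgt is started inside the namespace. Condensed into a standalone sketch (a rough reconstruction of what the harness runs, assuming the two E810 ports have already been renamed to cvl_0_0/cvl_0_1 and the SPDK build path matches this workspace):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace
  # started in the background here for illustration; the harness then waits for /var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &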
00:19:55.503 [2024-07-15 10:36:43.884511] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.503 [2024-07-15 10:36:43.884532] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.503 [2024-07-15 10:36:43.884548] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.503 [2024-07-15 10:36:43.884564] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.503 [2024-07-15 10:36:43.884648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.503 [2024-07-15 10:36:43.884756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.503 [2024-07-15 10:36:43.884842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.503 [2024-07-15 10:36:43.884850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.503 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.503 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:19:55.503 10:36:44 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:55.503 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.503 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:55.503 [2024-07-15 10:36:44.018584] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.503 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.503 10:36:44 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:55.503 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:55.503 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:55.762 Malloc0 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:55.762 [2024-07-15 10:36:44.099923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:55.762 [ 00:19:55.762 { 00:19:55.762 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:55.762 "subtype": "Discovery", 00:19:55.762 "listen_addresses": [ 00:19:55.762 { 00:19:55.762 "trtype": "TCP", 00:19:55.762 "adrfam": "IPv4", 00:19:55.762 "traddr": "10.0.0.2", 00:19:55.762 "trsvcid": "4420" 00:19:55.762 } 00:19:55.762 ], 00:19:55.762 "allow_any_host": true, 00:19:55.762 "hosts": [] 00:19:55.762 }, 00:19:55.762 { 00:19:55.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.762 "subtype": "NVMe", 00:19:55.762 "listen_addresses": [ 00:19:55.762 { 00:19:55.762 "trtype": "TCP", 00:19:55.762 "adrfam": "IPv4", 00:19:55.762 "traddr": "10.0.0.2", 00:19:55.762 "trsvcid": "4420" 00:19:55.762 } 00:19:55.762 ], 00:19:55.762 "allow_any_host": true, 00:19:55.762 "hosts": [], 00:19:55.762 "serial_number": "SPDK00000000000001", 00:19:55.762 "model_number": "SPDK bdev Controller", 00:19:55.762 "max_namespaces": 32, 00:19:55.762 "min_cntlid": 1, 00:19:55.762 "max_cntlid": 65519, 00:19:55.762 "namespaces": [ 00:19:55.762 { 00:19:55.762 "nsid": 1, 00:19:55.762 "bdev_name": "Malloc0", 00:19:55.762 "name": "Malloc0", 00:19:55.762 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:55.762 "eui64": "ABCDEF0123456789", 00:19:55.762 "uuid": "e30bc2b4-a35b-4993-91a3-e840148f6241" 00:19:55.762 } 00:19:55.762 ] 00:19:55.762 } 00:19:55.762 ] 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.762 10:36:44 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:55.762 [2024-07-15 10:36:44.142535] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
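Once the target is listening on /var/tmp/spdk.sock, identify.sh configures it over JSON-RPC: a TCP transport is created, a 64 MB malloc bdev with 512-byte blocks is exposed as namespace 1 of nqn.2016-06.io.spdk:cnode1, and both that subsystem and the discovery service get a TCP listener on 10.0.0.2:4420, which is exactly what the nvmf_get_subsystems dump above reports. The harness issues these through its rpc_cmd wrapper; a rough equivalent calling scripts/rpc.py directly (an assumption: that rpc_cmd forwards its arguments to rpc.py against the default /var/tmp/spdk.sock, and that rpc.py sits at this workspace path) looks like:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192          # transport options as used by the harness
  $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
       --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                              # should print the two-subsystem JSON seen above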
00:19:55.762 [2024-07-15 10:36:44.142579] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253789 ] 00:19:55.762 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.762 [2024-07-15 10:36:44.178276] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:55.763 [2024-07-15 10:36:44.178348] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:55.763 [2024-07-15 10:36:44.178359] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:55.763 [2024-07-15 10:36:44.178380] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:55.763 [2024-07-15 10:36:44.178392] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:55.763 [2024-07-15 10:36:44.181851] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:55.763 [2024-07-15 10:36:44.181927] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x961540 0 00:19:55.763 [2024-07-15 10:36:44.189816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:55.763 [2024-07-15 10:36:44.189838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:55.763 [2024-07-15 10:36:44.189847] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:55.763 [2024-07-15 10:36:44.189853] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:55.763 [2024-07-15 10:36:44.189911] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.189926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.189934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961540) 00:19:55.763 [2024-07-15 10:36:44.189953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:55.763 [2024-07-15 10:36:44.189980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c13c0, cid 0, qid 0 00:19:55.763 [2024-07-15 10:36:44.197815] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:55.763 [2024-07-15 10:36:44.197833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:55.763 [2024-07-15 10:36:44.197841] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.197849] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c13c0) on tqpair=0x961540 00:19:55.763 [2024-07-15 10:36:44.197871] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:55.763 [2024-07-15 10:36:44.197885] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:55.763 [2024-07-15 10:36:44.197895] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:55.763 [2024-07-15 10:36:44.197919] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.197928] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.197935] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961540) 00:19:55.763 [2024-07-15 10:36:44.197946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.763 [2024-07-15 10:36:44.197970] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c13c0, cid 0, qid 0 00:19:55.763 [2024-07-15 10:36:44.198103] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:55.763 [2024-07-15 10:36:44.198116] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:55.763 [2024-07-15 10:36:44.198123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.198130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c13c0) on tqpair=0x961540 00:19:55.763 [2024-07-15 10:36:44.198139] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:55.763 [2024-07-15 10:36:44.198152] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:55.763 [2024-07-15 10:36:44.198164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.198172] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.198179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961540) 00:19:55.763 [2024-07-15 10:36:44.198189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.763 [2024-07-15 10:36:44.198215] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c13c0, cid 0, qid 0 00:19:55.763 [2024-07-15 10:36:44.198299] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:55.763 [2024-07-15 10:36:44.198313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:55.763 [2024-07-15 10:36:44.198320] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.198327] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c13c0) on tqpair=0x961540 00:19:55.763 [2024-07-15 10:36:44.198336] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:55.763 [2024-07-15 10:36:44.198350] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:55.763 [2024-07-15 10:36:44.198363] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.198371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.198377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961540) 00:19:55.763 [2024-07-15 10:36:44.198388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.763 [2024-07-15 10:36:44.198409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c13c0, cid 0, qid 0 00:19:55.763 [2024-07-15 10:36:44.198499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:55.763 
[2024-07-15 10:36:44.198513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:55.763 [2024-07-15 10:36:44.198520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.198527] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c13c0) on tqpair=0x961540 00:19:55.763 [2024-07-15 10:36:44.198536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:55.763 [2024-07-15 10:36:44.198553] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.198563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.198569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961540) 00:19:55.763 [2024-07-15 10:36:44.198580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.763 [2024-07-15 10:36:44.198601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c13c0, cid 0, qid 0 00:19:55.763 [2024-07-15 10:36:44.198700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:55.763 [2024-07-15 10:36:44.198714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:55.763 [2024-07-15 10:36:44.198721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.198728] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c13c0) on tqpair=0x961540 00:19:55.763 [2024-07-15 10:36:44.198737] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:55.763 [2024-07-15 10:36:44.198746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:55.763 [2024-07-15 10:36:44.198759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:55.763 [2024-07-15 10:36:44.198870] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:55.763 [2024-07-15 10:36:44.198882] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:55.763 [2024-07-15 10:36:44.198898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.198905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.198916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961540) 00:19:55.763 [2024-07-15 10:36:44.198928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.763 [2024-07-15 10:36:44.198964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c13c0, cid 0, qid 0 00:19:55.763 [2024-07-15 10:36:44.199095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:55.763 [2024-07-15 10:36:44.199110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:55.763 [2024-07-15 10:36:44.199117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:19:55.763 [2024-07-15 10:36:44.199124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c13c0) on tqpair=0x961540 00:19:55.763 [2024-07-15 10:36:44.199133] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:55.763 [2024-07-15 10:36:44.199150] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.199160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.199166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961540) 00:19:55.763 [2024-07-15 10:36:44.199177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.763 [2024-07-15 10:36:44.199197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c13c0, cid 0, qid 0 00:19:55.763 [2024-07-15 10:36:44.199277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:55.763 [2024-07-15 10:36:44.199291] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:55.763 [2024-07-15 10:36:44.199298] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.199305] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c13c0) on tqpair=0x961540 00:19:55.763 [2024-07-15 10:36:44.199314] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:55.763 [2024-07-15 10:36:44.199323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:55.763 [2024-07-15 10:36:44.199337] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:55.763 [2024-07-15 10:36:44.199358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:55.763 [2024-07-15 10:36:44.199377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.199385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961540) 00:19:55.763 [2024-07-15 10:36:44.199396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.763 [2024-07-15 10:36:44.199418] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c13c0, cid 0, qid 0 00:19:55.763 [2024-07-15 10:36:44.199559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:55.763 [2024-07-15 10:36:44.199574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:55.763 [2024-07-15 10:36:44.199581] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:55.763 [2024-07-15 10:36:44.199587] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x961540): datao=0, datal=4096, cccid=0 00:19:55.763 [2024-07-15 10:36:44.199596] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c13c0) on tqpair(0x961540): expected_datao=0, payload_size=4096 00:19:55.764 [2024-07-15 10:36:44.199604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
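The DEBUG trace above shows the userspace initiator inside spdk_nvme_identify walking the fabrics bring-up against the discovery subsystem: FABRIC CONNECT on the admin queue, property reads of VS and CAP, CC.EN toggled through the disable/enable handshake until CSTS.RDY follows, then the controller Identify. Because nvme-tcp was loaded during setup, the same listener can also be cross-checked with the kernel initiator from the root namespace; this is a side note rather than something the harness runs, and it assumes nvme-cli is installed on the node:

  nvme discover -t tcp -a 10.0.0.2 -s 4420          # should list the discovery and cnode1 entries shown further below
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                                         # the Malloc0 namespace should appear as an nvme block device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1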
00:19:55.764 [2024-07-15 10:36:44.199615] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.199625] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.199642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:55.764 [2024-07-15 10:36:44.199653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:55.764 [2024-07-15 10:36:44.199660] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.199666] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c13c0) on tqpair=0x961540 00:19:55.764 [2024-07-15 10:36:44.199680] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:55.764 [2024-07-15 10:36:44.199694] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:55.764 [2024-07-15 10:36:44.199703] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:55.764 [2024-07-15 10:36:44.199713] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:55.764 [2024-07-15 10:36:44.199722] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:55.764 [2024-07-15 10:36:44.199731] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:55.764 [2024-07-15 10:36:44.199747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:55.764 [2024-07-15 10:36:44.199759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.199767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.199774] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961540) 00:19:55.764 [2024-07-15 10:36:44.199785] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:55.764 [2024-07-15 10:36:44.199815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c13c0, cid 0, qid 0 00:19:55.764 [2024-07-15 10:36:44.199949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:55.764 [2024-07-15 10:36:44.199963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:55.764 [2024-07-15 10:36:44.199970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.199977] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c13c0) on tqpair=0x961540 00:19:55.764 [2024-07-15 10:36:44.199991] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.199999] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.200006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961540) 00:19:55.764 [2024-07-15 10:36:44.200016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.764 [2024-07-15 10:36:44.200027] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.200034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.200040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x961540) 00:19:55.764 [2024-07-15 10:36:44.200049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.764 [2024-07-15 10:36:44.200059] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.200066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.200073] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x961540) 00:19:55.764 [2024-07-15 10:36:44.200082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.764 [2024-07-15 10:36:44.200092] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.200099] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.200109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x961540) 00:19:55.764 [2024-07-15 10:36:44.200134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.764 [2024-07-15 10:36:44.200144] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:55.764 [2024-07-15 10:36:44.200164] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:55.764 [2024-07-15 10:36:44.200177] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.200199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x961540) 00:19:55.764 [2024-07-15 10:36:44.200210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.764 [2024-07-15 10:36:44.200231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c13c0, cid 0, qid 0 00:19:55.764 [2024-07-15 10:36:44.200242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c1540, cid 1, qid 0 00:19:55.764 [2024-07-15 10:36:44.200264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c16c0, cid 2, qid 0 00:19:55.764 [2024-07-15 10:36:44.200273] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c1840, cid 3, qid 0 00:19:55.764 [2024-07-15 10:36:44.200281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c19c0, cid 4, qid 0 00:19:55.764 [2024-07-15 10:36:44.200460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:55.764 [2024-07-15 10:36:44.200472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:55.764 [2024-07-15 10:36:44.200479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.200486] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c19c0) on tqpair=0x961540 00:19:55.764 [2024-07-15 10:36:44.200497] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:55.764 [2024-07-15 10:36:44.200507] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:55.764 [2024-07-15 10:36:44.200525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.200534] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x961540) 00:19:55.764 [2024-07-15 10:36:44.200545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.764 [2024-07-15 10:36:44.200582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c19c0, cid 4, qid 0 00:19:55.764 [2024-07-15 10:36:44.200743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:55.764 [2024-07-15 10:36:44.200757] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:55.764 [2024-07-15 10:36:44.200764] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.200771] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x961540): datao=0, datal=4096, cccid=4 00:19:55.764 [2024-07-15 10:36:44.200779] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c19c0) on tqpair(0x961540): expected_datao=0, payload_size=4096 00:19:55.764 [2024-07-15 10:36:44.200786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.200811] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.200822] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.240912] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:55.764 [2024-07-15 10:36:44.240930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:55.764 [2024-07-15 10:36:44.240938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.240945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c19c0) on tqpair=0x961540 00:19:55.764 [2024-07-15 10:36:44.240970] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:55.764 [2024-07-15 10:36:44.241015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.241026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x961540) 00:19:55.764 [2024-07-15 10:36:44.241038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.764 [2024-07-15 10:36:44.241051] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.241058] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.241065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x961540) 00:19:55.764 [2024-07-15 10:36:44.241074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.764 [2024-07-15 10:36:44.241103] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x9c19c0, cid 4, qid 0 00:19:55.764 [2024-07-15 10:36:44.241116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c1b40, cid 5, qid 0 00:19:55.764 [2024-07-15 10:36:44.241248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:55.764 [2024-07-15 10:36:44.241262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:55.764 [2024-07-15 10:36:44.241269] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.241276] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x961540): datao=0, datal=1024, cccid=4 00:19:55.764 [2024-07-15 10:36:44.241284] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c19c0) on tqpair(0x961540): expected_datao=0, payload_size=1024 00:19:55.764 [2024-07-15 10:36:44.241291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.241301] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.241309] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.241317] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:55.764 [2024-07-15 10:36:44.241326] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:55.764 [2024-07-15 10:36:44.241333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.241340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c1b40) on tqpair=0x961540 00:19:55.764 [2024-07-15 10:36:44.284831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:55.764 [2024-07-15 10:36:44.284850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:55.764 [2024-07-15 10:36:44.284858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.284865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c19c0) on tqpair=0x961540 00:19:55.764 [2024-07-15 10:36:44.284884] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.764 [2024-07-15 10:36:44.284894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x961540) 00:19:55.764 [2024-07-15 10:36:44.284905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.765 [2024-07-15 10:36:44.284936] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c19c0, cid 4, qid 0 00:19:55.765 [2024-07-15 10:36:44.285096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:55.765 [2024-07-15 10:36:44.285111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:55.765 [2024-07-15 10:36:44.285118] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:55.765 [2024-07-15 10:36:44.285125] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x961540): datao=0, datal=3072, cccid=4 00:19:55.765 [2024-07-15 10:36:44.285133] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c19c0) on tqpair(0x961540): expected_datao=0, payload_size=3072 00:19:55.765 [2024-07-15 10:36:44.285145] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.765 [2024-07-15 10:36:44.285156] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:55.765 [2024-07-15 10:36:44.285164] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:55.765 [2024-07-15 10:36:44.285186] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:55.765 [2024-07-15 10:36:44.285199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:55.765 [2024-07-15 10:36:44.285206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:55.765 [2024-07-15 10:36:44.285213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c19c0) on tqpair=0x961540 00:19:55.765 [2024-07-15 10:36:44.285228] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:55.765 [2024-07-15 10:36:44.285237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x961540) 00:19:55.765 [2024-07-15 10:36:44.285248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.765 [2024-07-15 10:36:44.285277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c19c0, cid 4, qid 0 00:19:55.765 [2024-07-15 10:36:44.285379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:55.765 [2024-07-15 10:36:44.285393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:55.765 [2024-07-15 10:36:44.285400] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:55.765 [2024-07-15 10:36:44.285406] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x961540): datao=0, datal=8, cccid=4 00:19:55.765 [2024-07-15 10:36:44.285414] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c19c0) on tqpair(0x961540): expected_datao=0, payload_size=8 00:19:55.765 [2024-07-15 10:36:44.285421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:55.765 [2024-07-15 10:36:44.285431] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:55.765 [2024-07-15 10:36:44.285438] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:56.027 [2024-07-15 10:36:44.325911] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.027 [2024-07-15 10:36:44.325932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.027 [2024-07-15 10:36:44.325941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.027 [2024-07-15 10:36:44.325948] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c19c0) on tqpair=0x961540 00:19:56.027 ===================================================== 00:19:56.027 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:56.027 ===================================================== 00:19:56.027 Controller Capabilities/Features 00:19:56.027 ================================ 00:19:56.027 Vendor ID: 0000 00:19:56.027 Subsystem Vendor ID: 0000 00:19:56.027 Serial Number: .................... 00:19:56.027 Model Number: ........................................ 
00:19:56.027 Firmware Version: 24.09 00:19:56.027 Recommended Arb Burst: 0 00:19:56.027 IEEE OUI Identifier: 00 00 00 00:19:56.027 Multi-path I/O 00:19:56.027 May have multiple subsystem ports: No 00:19:56.027 May have multiple controllers: No 00:19:56.027 Associated with SR-IOV VF: No 00:19:56.027 Max Data Transfer Size: 131072 00:19:56.027 Max Number of Namespaces: 0 00:19:56.027 Max Number of I/O Queues: 1024 00:19:56.027 NVMe Specification Version (VS): 1.3 00:19:56.027 NVMe Specification Version (Identify): 1.3 00:19:56.027 Maximum Queue Entries: 128 00:19:56.027 Contiguous Queues Required: Yes 00:19:56.027 Arbitration Mechanisms Supported 00:19:56.027 Weighted Round Robin: Not Supported 00:19:56.027 Vendor Specific: Not Supported 00:19:56.027 Reset Timeout: 15000 ms 00:19:56.027 Doorbell Stride: 4 bytes 00:19:56.027 NVM Subsystem Reset: Not Supported 00:19:56.027 Command Sets Supported 00:19:56.027 NVM Command Set: Supported 00:19:56.027 Boot Partition: Not Supported 00:19:56.027 Memory Page Size Minimum: 4096 bytes 00:19:56.027 Memory Page Size Maximum: 4096 bytes 00:19:56.027 Persistent Memory Region: Not Supported 00:19:56.027 Optional Asynchronous Events Supported 00:19:56.027 Namespace Attribute Notices: Not Supported 00:19:56.027 Firmware Activation Notices: Not Supported 00:19:56.027 ANA Change Notices: Not Supported 00:19:56.027 PLE Aggregate Log Change Notices: Not Supported 00:19:56.027 LBA Status Info Alert Notices: Not Supported 00:19:56.027 EGE Aggregate Log Change Notices: Not Supported 00:19:56.027 Normal NVM Subsystem Shutdown event: Not Supported 00:19:56.027 Zone Descriptor Change Notices: Not Supported 00:19:56.027 Discovery Log Change Notices: Supported 00:19:56.027 Controller Attributes 00:19:56.027 128-bit Host Identifier: Not Supported 00:19:56.027 Non-Operational Permissive Mode: Not Supported 00:19:56.027 NVM Sets: Not Supported 00:19:56.027 Read Recovery Levels: Not Supported 00:19:56.027 Endurance Groups: Not Supported 00:19:56.027 Predictable Latency Mode: Not Supported 00:19:56.027 Traffic Based Keep ALive: Not Supported 00:19:56.027 Namespace Granularity: Not Supported 00:19:56.027 SQ Associations: Not Supported 00:19:56.027 UUID List: Not Supported 00:19:56.027 Multi-Domain Subsystem: Not Supported 00:19:56.027 Fixed Capacity Management: Not Supported 00:19:56.027 Variable Capacity Management: Not Supported 00:19:56.027 Delete Endurance Group: Not Supported 00:19:56.027 Delete NVM Set: Not Supported 00:19:56.027 Extended LBA Formats Supported: Not Supported 00:19:56.027 Flexible Data Placement Supported: Not Supported 00:19:56.027 00:19:56.027 Controller Memory Buffer Support 00:19:56.027 ================================ 00:19:56.027 Supported: No 00:19:56.027 00:19:56.027 Persistent Memory Region Support 00:19:56.027 ================================ 00:19:56.027 Supported: No 00:19:56.027 00:19:56.027 Admin Command Set Attributes 00:19:56.027 ============================ 00:19:56.027 Security Send/Receive: Not Supported 00:19:56.027 Format NVM: Not Supported 00:19:56.027 Firmware Activate/Download: Not Supported 00:19:56.027 Namespace Management: Not Supported 00:19:56.027 Device Self-Test: Not Supported 00:19:56.027 Directives: Not Supported 00:19:56.027 NVMe-MI: Not Supported 00:19:56.027 Virtualization Management: Not Supported 00:19:56.027 Doorbell Buffer Config: Not Supported 00:19:56.027 Get LBA Status Capability: Not Supported 00:19:56.027 Command & Feature Lockdown Capability: Not Supported 00:19:56.027 Abort Command Limit: 1 00:19:56.027 Async 
Event Request Limit: 4 00:19:56.027 Number of Firmware Slots: N/A 00:19:56.027 Firmware Slot 1 Read-Only: N/A 00:19:56.027 Firmware Activation Without Reset: N/A 00:19:56.027 Multiple Update Detection Support: N/A 00:19:56.027 Firmware Update Granularity: No Information Provided 00:19:56.027 Per-Namespace SMART Log: No 00:19:56.027 Asymmetric Namespace Access Log Page: Not Supported 00:19:56.027 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:56.027 Command Effects Log Page: Not Supported 00:19:56.027 Get Log Page Extended Data: Supported 00:19:56.027 Telemetry Log Pages: Not Supported 00:19:56.027 Persistent Event Log Pages: Not Supported 00:19:56.027 Supported Log Pages Log Page: May Support 00:19:56.027 Commands Supported & Effects Log Page: Not Supported 00:19:56.027 Feature Identifiers & Effects Log Page:May Support 00:19:56.027 NVMe-MI Commands & Effects Log Page: May Support 00:19:56.027 Data Area 4 for Telemetry Log: Not Supported 00:19:56.027 Error Log Page Entries Supported: 128 00:19:56.027 Keep Alive: Not Supported 00:19:56.027 00:19:56.027 NVM Command Set Attributes 00:19:56.027 ========================== 00:19:56.027 Submission Queue Entry Size 00:19:56.027 Max: 1 00:19:56.027 Min: 1 00:19:56.027 Completion Queue Entry Size 00:19:56.027 Max: 1 00:19:56.027 Min: 1 00:19:56.027 Number of Namespaces: 0 00:19:56.027 Compare Command: Not Supported 00:19:56.027 Write Uncorrectable Command: Not Supported 00:19:56.027 Dataset Management Command: Not Supported 00:19:56.027 Write Zeroes Command: Not Supported 00:19:56.027 Set Features Save Field: Not Supported 00:19:56.027 Reservations: Not Supported 00:19:56.027 Timestamp: Not Supported 00:19:56.027 Copy: Not Supported 00:19:56.027 Volatile Write Cache: Not Present 00:19:56.027 Atomic Write Unit (Normal): 1 00:19:56.027 Atomic Write Unit (PFail): 1 00:19:56.027 Atomic Compare & Write Unit: 1 00:19:56.027 Fused Compare & Write: Supported 00:19:56.027 Scatter-Gather List 00:19:56.027 SGL Command Set: Supported 00:19:56.027 SGL Keyed: Supported 00:19:56.027 SGL Bit Bucket Descriptor: Not Supported 00:19:56.027 SGL Metadata Pointer: Not Supported 00:19:56.027 Oversized SGL: Not Supported 00:19:56.028 SGL Metadata Address: Not Supported 00:19:56.028 SGL Offset: Supported 00:19:56.028 Transport SGL Data Block: Not Supported 00:19:56.028 Replay Protected Memory Block: Not Supported 00:19:56.028 00:19:56.028 Firmware Slot Information 00:19:56.028 ========================= 00:19:56.028 Active slot: 0 00:19:56.028 00:19:56.028 00:19:56.028 Error Log 00:19:56.028 ========= 00:19:56.028 00:19:56.028 Active Namespaces 00:19:56.028 ================= 00:19:56.028 Discovery Log Page 00:19:56.028 ================== 00:19:56.028 Generation Counter: 2 00:19:56.028 Number of Records: 2 00:19:56.028 Record Format: 0 00:19:56.028 00:19:56.028 Discovery Log Entry 0 00:19:56.028 ---------------------- 00:19:56.028 Transport Type: 3 (TCP) 00:19:56.028 Address Family: 1 (IPv4) 00:19:56.028 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:56.028 Entry Flags: 00:19:56.028 Duplicate Returned Information: 1 00:19:56.028 Explicit Persistent Connection Support for Discovery: 1 00:19:56.028 Transport Requirements: 00:19:56.028 Secure Channel: Not Required 00:19:56.028 Port ID: 0 (0x0000) 00:19:56.028 Controller ID: 65535 (0xffff) 00:19:56.028 Admin Max SQ Size: 128 00:19:56.028 Transport Service Identifier: 4420 00:19:56.028 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:56.028 Transport Address: 10.0.0.2 00:19:56.028 
Discovery Log Entry 1 00:19:56.028 ---------------------- 00:19:56.028 Transport Type: 3 (TCP) 00:19:56.028 Address Family: 1 (IPv4) 00:19:56.028 Subsystem Type: 2 (NVM Subsystem) 00:19:56.028 Entry Flags: 00:19:56.028 Duplicate Returned Information: 0 00:19:56.028 Explicit Persistent Connection Support for Discovery: 0 00:19:56.028 Transport Requirements: 00:19:56.028 Secure Channel: Not Required 00:19:56.028 Port ID: 0 (0x0000) 00:19:56.028 Controller ID: 65535 (0xffff) 00:19:56.028 Admin Max SQ Size: 128 00:19:56.028 Transport Service Identifier: 4420 00:19:56.028 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:56.028 Transport Address: 10.0.0.2 [2024-07-15 10:36:44.326070] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:56.028 [2024-07-15 10:36:44.326093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c13c0) on tqpair=0x961540 00:19:56.028 [2024-07-15 10:36:44.326106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.028 [2024-07-15 10:36:44.326116] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c1540) on tqpair=0x961540 00:19:56.028 [2024-07-15 10:36:44.326124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.028 [2024-07-15 10:36:44.326133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c16c0) on tqpair=0x961540 00:19:56.028 [2024-07-15 10:36:44.326141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.028 [2024-07-15 10:36:44.326149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c1840) on tqpair=0x961540 00:19:56.028 [2024-07-15 10:36:44.326157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.028 [2024-07-15 10:36:44.326177] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.028 [2024-07-15 10:36:44.326186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.028 [2024-07-15 10:36:44.326208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x961540) 00:19:56.028 [2024-07-15 10:36:44.326222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.028 [2024-07-15 10:36:44.326249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c1840, cid 3, qid 0 00:19:56.028 [2024-07-15 10:36:44.326395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.028 [2024-07-15 10:36:44.326408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.028 [2024-07-15 10:36:44.326415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.028 [2024-07-15 10:36:44.326422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c1840) on tqpair=0x961540 00:19:56.028 [2024-07-15 10:36:44.326435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.028 [2024-07-15 10:36:44.326443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.028 [2024-07-15 10:36:44.326450] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x961540) 00:19:56.028 [2024-07-15 10:36:44.326461] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.028 [2024-07-15 10:36:44.326487] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c1840, cid 3, qid 0 00:19:56.028 [2024-07-15 10:36:44.326583] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.028 [2024-07-15 10:36:44.326595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.028 [2024-07-15 10:36:44.326602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.028 [2024-07-15 10:36:44.326609] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c1840) on tqpair=0x961540 00:19:56.028 [2024-07-15 10:36:44.326618] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:56.028 [2024-07-15 10:36:44.326627] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:56.028 [2024-07-15 10:36:44.326643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.028 [2024-07-15 10:36:44.326652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.028 [2024-07-15 10:36:44.326659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x961540) 00:19:56.028 [2024-07-15 10:36:44.326670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.028 [2024-07-15 10:36:44.326690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c1840, cid 3, qid 0 00:19:56.028 [2024-07-15 10:36:44.326768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.028 [2024-07-15 10:36:44.326780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.028 [2024-07-15 10:36:44.326787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.028 [2024-07-15 10:36:44.326794] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c1840) on tqpair=0x961540 00:19:56.028 [2024-07-15 10:36:44.332825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.028 [2024-07-15 10:36:44.332839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.028 [2024-07-15 10:36:44.332861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x961540) 00:19:56.028 [2024-07-15 10:36:44.332872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.028 [2024-07-15 10:36:44.332895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c1840, cid 3, qid 0 00:19:56.028 [2024-07-15 10:36:44.332995] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.028 [2024-07-15 10:36:44.333008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.028 [2024-07-15 10:36:44.333015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.028 [2024-07-15 10:36:44.333022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c1840) on tqpair=0x961540 00:19:56.028 [2024-07-15 10:36:44.333035] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:19:56.028 00:19:56.028 10:36:44 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:56.028 [2024-07-15 10:36:44.365875] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:56.028 [2024-07-15 10:36:44.365923] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253795 ] 00:19:56.028 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.028 [2024-07-15 10:36:44.400019] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:56.028 [2024-07-15 10:36:44.400074] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:56.028 [2024-07-15 10:36:44.400084] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:56.028 [2024-07-15 10:36:44.400098] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:56.028 [2024-07-15 10:36:44.400133] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:56.028 [2024-07-15 10:36:44.400311] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:56.028 [2024-07-15 10:36:44.400353] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1cf4540 0 00:19:56.028 [2024-07-15 10:36:44.413842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:56.028 [2024-07-15 10:36:44.413861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:56.028 [2024-07-15 10:36:44.413869] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:56.028 [2024-07-15 10:36:44.413876] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:56.028 [2024-07-15 10:36:44.413915] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.028 [2024-07-15 10:36:44.413926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.028 [2024-07-15 10:36:44.413933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf4540) 00:19:56.028 [2024-07-15 10:36:44.413946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:56.029 [2024-07-15 10:36:44.413972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d543c0, cid 0, qid 0 00:19:56.029 [2024-07-15 10:36:44.420820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.029 [2024-07-15 10:36:44.420838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.029 [2024-07-15 10:36:44.420846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.420853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d543c0) on tqpair=0x1cf4540 00:19:56.029 [2024-07-15 10:36:44.420872] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:56.029 [2024-07-15 10:36:44.420884] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:56.029 [2024-07-15 10:36:44.420893] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:56.029 [2024-07-15 10:36:44.420910] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.420919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.420925] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf4540) 00:19:56.029 [2024-07-15 10:36:44.420936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.029 [2024-07-15 10:36:44.420963] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d543c0, cid 0, qid 0 00:19:56.029 [2024-07-15 10:36:44.421082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.029 [2024-07-15 10:36:44.421098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.029 [2024-07-15 10:36:44.421105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.421112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d543c0) on tqpair=0x1cf4540 00:19:56.029 [2024-07-15 10:36:44.421127] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:56.029 [2024-07-15 10:36:44.421141] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:56.029 [2024-07-15 10:36:44.421153] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.421161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.421167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf4540) 00:19:56.029 [2024-07-15 10:36:44.421178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.029 [2024-07-15 10:36:44.421200] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d543c0, cid 0, qid 0 00:19:56.029 [2024-07-15 10:36:44.421291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.029 [2024-07-15 10:36:44.421304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.029 [2024-07-15 10:36:44.421311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.421318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d543c0) on tqpair=0x1cf4540 00:19:56.029 [2024-07-15 10:36:44.421327] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:56.029 [2024-07-15 10:36:44.421341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:56.029 [2024-07-15 10:36:44.421353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.421360] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.421367] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf4540) 00:19:56.029 [2024-07-15 10:36:44.421377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.029 [2024-07-15 10:36:44.421399] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d543c0, cid 0, qid 0 00:19:56.029 [2024-07-15 10:36:44.421475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.029 [2024-07-15 10:36:44.421489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.029 [2024-07-15 10:36:44.421497] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.421504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d543c0) on tqpair=0x1cf4540 00:19:56.029 [2024-07-15 10:36:44.421512] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:56.029 [2024-07-15 10:36:44.421529] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.421539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.421545] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf4540) 00:19:56.029 [2024-07-15 10:36:44.421556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.029 [2024-07-15 10:36:44.421577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d543c0, cid 0, qid 0 00:19:56.029 [2024-07-15 10:36:44.421656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.029 [2024-07-15 10:36:44.421670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.029 [2024-07-15 10:36:44.421681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.421689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d543c0) on tqpair=0x1cf4540 00:19:56.029 [2024-07-15 10:36:44.421697] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:56.029 [2024-07-15 10:36:44.421705] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:56.029 [2024-07-15 10:36:44.421719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:56.029 [2024-07-15 10:36:44.425810] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:56.029 [2024-07-15 10:36:44.425824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:56.029 [2024-07-15 10:36:44.425837] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.425845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.425851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf4540) 00:19:56.029 [2024-07-15 10:36:44.425862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.029 [2024-07-15 10:36:44.425884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d543c0, cid 0, qid 0 00:19:56.029 [2024-07-15 10:36:44.426001] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.029 [2024-07-15 10:36:44.426016] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.029 [2024-07-15 10:36:44.426023] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.426030] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d543c0) on tqpair=0x1cf4540 00:19:56.029 [2024-07-15 10:36:44.426038] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:56.029 [2024-07-15 10:36:44.426055] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.426064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.426071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf4540) 00:19:56.029 [2024-07-15 10:36:44.426082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.029 [2024-07-15 10:36:44.426103] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d543c0, cid 0, qid 0 00:19:56.029 [2024-07-15 10:36:44.426185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.029 [2024-07-15 10:36:44.426200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.029 [2024-07-15 10:36:44.426207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.426214] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d543c0) on tqpair=0x1cf4540 00:19:56.029 [2024-07-15 10:36:44.426221] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:56.029 [2024-07-15 10:36:44.426230] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:56.029 [2024-07-15 10:36:44.426243] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:56.029 [2024-07-15 10:36:44.426258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:56.029 [2024-07-15 10:36:44.426272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.426280] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf4540) 00:19:56.029 [2024-07-15 10:36:44.426295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.029 [2024-07-15 10:36:44.426317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d543c0, cid 0, qid 0 00:19:56.029 [2024-07-15 10:36:44.426431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:56.029 [2024-07-15 10:36:44.426444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:56.029 [2024-07-15 10:36:44.426451] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.426458] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf4540): datao=0, datal=4096, cccid=0 00:19:56.029 [2024-07-15 10:36:44.426466] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d543c0) on tqpair(0x1cf4540): expected_datao=0, 
payload_size=4096 00:19:56.029 [2024-07-15 10:36:44.426473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.426484] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.426492] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.426504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.029 [2024-07-15 10:36:44.426514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.029 [2024-07-15 10:36:44.426521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.029 [2024-07-15 10:36:44.426528] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d543c0) on tqpair=0x1cf4540 00:19:56.029 [2024-07-15 10:36:44.426539] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:56.029 [2024-07-15 10:36:44.426552] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:56.029 [2024-07-15 10:36:44.426560] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:56.029 [2024-07-15 10:36:44.426567] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:56.029 [2024-07-15 10:36:44.426574] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:56.030 [2024-07-15 10:36:44.426582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:56.030 [2024-07-15 10:36:44.426597] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:56.030 [2024-07-15 10:36:44.426609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.426616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.426623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf4540) 00:19:56.030 [2024-07-15 10:36:44.426633] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.030 [2024-07-15 10:36:44.426655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d543c0, cid 0, qid 0 00:19:56.030 [2024-07-15 10:36:44.426741] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.030 [2024-07-15 10:36:44.426756] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.030 [2024-07-15 10:36:44.426763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.426770] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d543c0) on tqpair=0x1cf4540 00:19:56.030 [2024-07-15 10:36:44.426780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.426788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.426796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf4540) 00:19:56.030 [2024-07-15 10:36:44.426817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
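(The state transitions traced above — connect adminq, read VS/CAP, check and set CC.EN, wait for CSTS.RDY, identify controller, configure AER — are what the SPDK host driver walks through when a caller attaches to the target at 10.0.0.2:4420. A minimal sketch of such a caller is shown below, assuming the public SPDK C API (spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data); the real spdk_nvme_identify tool is considerably more elaborate, so treat this only as an illustration of what drives these debug lines, not as its implementation.)

    /* identify_sketch.c - hedged illustration of attaching to an NVMe-oF/TCP
     * controller with the SPDK host driver. The app name and error handling
     * are assumptions; only the transport ID string is taken from the log. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr_opts ctrlr_opts;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";            /* assumed app name */
        if (spdk_env_init(&env_opts) < 0) {
            fprintf(stderr, "spdk_env_init failed\n");
            return 1;
        }

        /* Same transport ID string that was passed to spdk_nvme_identify -r above. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            fprintf(stderr, "failed to parse transport ID\n");
            return 1;
        }

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));

        /* spdk_nvme_connect() runs the initialization state machine logged above:
         * connect adminq, read VS/CAP, enable CC.EN, wait for CSTS.RDY = 1,
         * identify controller, configure AER, keep-alive, number of queues, ... */
        ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
        if (ctrlr == NULL) {
            fprintf(stderr, "spdk_nvme_connect failed\n");
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", cdata->sn);
        printf("Model Number:  %.40s\n", cdata->mn);

        spdk_nvme_detach(ctrlr);   /* triggers the shutdown/destruct path seen later in the trace */
        return 0;
    }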
00:19:56.030 [2024-07-15 10:36:44.426832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.426840] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.426846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1cf4540) 00:19:56.030 [2024-07-15 10:36:44.426855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.030 [2024-07-15 10:36:44.426865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.426872] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.426878] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1cf4540) 00:19:56.030 [2024-07-15 10:36:44.426887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.030 [2024-07-15 10:36:44.426897] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.426903] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.426910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.030 [2024-07-15 10:36:44.426918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.030 [2024-07-15 10:36:44.426927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:56.030 [2024-07-15 10:36:44.426946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:56.030 [2024-07-15 10:36:44.426974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.426981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf4540) 00:19:56.030 [2024-07-15 10:36:44.426992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.030 [2024-07-15 10:36:44.427014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d543c0, cid 0, qid 0 00:19:56.030 [2024-07-15 10:36:44.427040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54540, cid 1, qid 0 00:19:56.030 [2024-07-15 10:36:44.427048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d546c0, cid 2, qid 0 00:19:56.030 [2024-07-15 10:36:44.427056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.030 [2024-07-15 10:36:44.427064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d549c0, cid 4, qid 0 00:19:56.030 [2024-07-15 10:36:44.427215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.030 [2024-07-15 10:36:44.427229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.030 [2024-07-15 10:36:44.427236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.427243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d549c0) on tqpair=0x1cf4540 00:19:56.030 [2024-07-15 10:36:44.427259] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:56.030 [2024-07-15 10:36:44.427269] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:56.030 [2024-07-15 10:36:44.427283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:56.030 [2024-07-15 10:36:44.427295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:56.030 [2024-07-15 10:36:44.427306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.427313] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.427333] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf4540) 00:19:56.030 [2024-07-15 10:36:44.427348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.030 [2024-07-15 10:36:44.427370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d549c0, cid 4, qid 0 00:19:56.030 [2024-07-15 10:36:44.427497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.030 [2024-07-15 10:36:44.427510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.030 [2024-07-15 10:36:44.427517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.427524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d549c0) on tqpair=0x1cf4540 00:19:56.030 [2024-07-15 10:36:44.427589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:56.030 [2024-07-15 10:36:44.427607] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:56.030 [2024-07-15 10:36:44.427623] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.427631] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf4540) 00:19:56.030 [2024-07-15 10:36:44.427642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.030 [2024-07-15 10:36:44.427663] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d549c0, cid 4, qid 0 00:19:56.030 [2024-07-15 10:36:44.427762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:56.030 [2024-07-15 10:36:44.427775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:56.030 [2024-07-15 10:36:44.427782] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.427788] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf4540): datao=0, datal=4096, cccid=4 00:19:56.030 [2024-07-15 10:36:44.427796] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d549c0) on tqpair(0x1cf4540): expected_datao=0, payload_size=4096 00:19:56.030 [2024-07-15 10:36:44.427813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.030 [2024-07-15 
10:36:44.427831] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.427840] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.427852] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.030 [2024-07-15 10:36:44.427862] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.030 [2024-07-15 10:36:44.427869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.427876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d549c0) on tqpair=0x1cf4540 00:19:56.030 [2024-07-15 10:36:44.427892] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:56.030 [2024-07-15 10:36:44.427914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:56.030 [2024-07-15 10:36:44.427932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:56.030 [2024-07-15 10:36:44.427946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.427954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf4540) 00:19:56.030 [2024-07-15 10:36:44.427965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.030 [2024-07-15 10:36:44.427987] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d549c0, cid 4, qid 0 00:19:56.030 [2024-07-15 10:36:44.428092] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:56.030 [2024-07-15 10:36:44.428106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:56.030 [2024-07-15 10:36:44.428113] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.428123] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf4540): datao=0, datal=4096, cccid=4 00:19:56.030 [2024-07-15 10:36:44.428131] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d549c0) on tqpair(0x1cf4540): expected_datao=0, payload_size=4096 00:19:56.030 [2024-07-15 10:36:44.428138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.428156] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.428166] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.428202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.030 [2024-07-15 10:36:44.428216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.030 [2024-07-15 10:36:44.428223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.428230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d549c0) on tqpair=0x1cf4540 00:19:56.030 [2024-07-15 10:36:44.428251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:56.030 [2024-07-15 10:36:44.428270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:56.030 
[2024-07-15 10:36:44.428285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.030 [2024-07-15 10:36:44.428293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf4540) 00:19:56.030 [2024-07-15 10:36:44.428303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.030 [2024-07-15 10:36:44.428325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d549c0, cid 4, qid 0 00:19:56.030 [2024-07-15 10:36:44.428430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:56.031 [2024-07-15 10:36:44.428443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:56.031 [2024-07-15 10:36:44.428450] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.428456] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf4540): datao=0, datal=4096, cccid=4 00:19:56.031 [2024-07-15 10:36:44.428464] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d549c0) on tqpair(0x1cf4540): expected_datao=0, payload_size=4096 00:19:56.031 [2024-07-15 10:36:44.428471] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.428481] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.428490] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.428502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.031 [2024-07-15 10:36:44.428512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.031 [2024-07-15 10:36:44.428519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.428526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d549c0) on tqpair=0x1cf4540 00:19:56.031 [2024-07-15 10:36:44.428539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:56.031 [2024-07-15 10:36:44.428554] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:56.031 [2024-07-15 10:36:44.428569] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:56.031 [2024-07-15 10:36:44.428581] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:56.031 [2024-07-15 10:36:44.428589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:56.031 [2024-07-15 10:36:44.428603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:56.031 [2024-07-15 10:36:44.428612] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:56.031 [2024-07-15 10:36:44.428620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:56.031 [2024-07-15 10:36:44.428629] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to ready (no timeout) 00:19:56.031 [2024-07-15 10:36:44.428648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.428656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf4540) 00:19:56.031 [2024-07-15 10:36:44.428667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.031 [2024-07-15 10:36:44.428678] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.428686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.428692] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cf4540) 00:19:56.031 [2024-07-15 10:36:44.428701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.031 [2024-07-15 10:36:44.428746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d549c0, cid 4, qid 0 00:19:56.031 [2024-07-15 10:36:44.428757] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54b40, cid 5, qid 0 00:19:56.031 [2024-07-15 10:36:44.428920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.031 [2024-07-15 10:36:44.428935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.031 [2024-07-15 10:36:44.428942] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.428949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d549c0) on tqpair=0x1cf4540 00:19:56.031 [2024-07-15 10:36:44.428959] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.031 [2024-07-15 10:36:44.428969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.031 [2024-07-15 10:36:44.428976] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.428982] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54b40) on tqpair=0x1cf4540 00:19:56.031 [2024-07-15 10:36:44.428998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.429008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cf4540) 00:19:56.031 [2024-07-15 10:36:44.429018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.031 [2024-07-15 10:36:44.429039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54b40, cid 5, qid 0 00:19:56.031 [2024-07-15 10:36:44.429123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.031 [2024-07-15 10:36:44.429135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.031 [2024-07-15 10:36:44.429143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.429149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54b40) on tqpair=0x1cf4540 00:19:56.031 [2024-07-15 10:36:44.429165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.429174] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cf4540) 00:19:56.031 [2024-07-15 10:36:44.429185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.031 [2024-07-15 10:36:44.429205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54b40, cid 5, qid 0 00:19:56.031 [2024-07-15 10:36:44.429282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.031 [2024-07-15 10:36:44.429294] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.031 [2024-07-15 10:36:44.429305] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.429312] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54b40) on tqpair=0x1cf4540 00:19:56.031 [2024-07-15 10:36:44.429328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.429337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cf4540) 00:19:56.031 [2024-07-15 10:36:44.429348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.031 [2024-07-15 10:36:44.429368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54b40, cid 5, qid 0 00:19:56.031 [2024-07-15 10:36:44.429445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.031 [2024-07-15 10:36:44.429458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.031 [2024-07-15 10:36:44.429465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.429472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54b40) on tqpair=0x1cf4540 00:19:56.031 [2024-07-15 10:36:44.429495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.429506] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cf4540) 00:19:56.031 [2024-07-15 10:36:44.429517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.031 [2024-07-15 10:36:44.429529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.429537] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf4540) 00:19:56.031 [2024-07-15 10:36:44.429546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.031 [2024-07-15 10:36:44.429558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.429566] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1cf4540) 00:19:56.031 [2024-07-15 10:36:44.429575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.031 [2024-07-15 10:36:44.429588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.429595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1cf4540) 00:19:56.031 [2024-07-15 10:36:44.429605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
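(The four GET LOG PAGE commands just above belong to the "set supported log pages" state: cdw10 packs the Log Page Identifier in bits 07:00 and NUMDL in bits 31:16, and with cdw11:00000000 (NUMDU = 0) the transfer size is simply (NUMDL + 1) * 4 bytes. The small decode helper below is written here only as an illustration of that NVMe base-spec layout; it is not taken from the SPDK sources.)

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the GET LOG PAGE CDW10 values printed in the trace above.
     * NVMe base spec: bits 07:00 = Log Page Identifier (LID),
     * bits 31:16 = Number of Dwords Lower (NUMDL, 0's based). */
    static void decode_get_log_page_cdw10(uint32_t cdw10)
    {
        uint8_t  lid   = cdw10 & 0xff;
        uint32_t numdl = (cdw10 >> 16) & 0xffff;
        uint32_t bytes = (numdl + 1) * 4;   /* valid here because NUMDU (cdw11) is 0 */

        printf("cdw10=0x%08x -> LID 0x%02x, %u bytes\n", cdw10, lid, bytes);
    }

    int main(void)
    {
        decode_get_log_page_cdw10(0x07ff0001); /* LID 0x01 Error Information, 8192 B (128 x 64 B entries) */
        decode_get_log_page_cdw10(0x007f0002); /* LID 0x02 SMART / Health Information, 512 B */
        decode_get_log_page_cdw10(0x007f0003); /* LID 0x03 Firmware Slot Information, 512 B */
        decode_get_log_page_cdw10(0x03ff0005); /* LID 0x05 Commands Supported and Effects, 4096 B */
        return 0;
    }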
00:19:56.031 [2024-07-15 10:36:44.429627] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54b40, cid 5, qid 0 00:19:56.031 [2024-07-15 10:36:44.429637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d549c0, cid 4, qid 0 00:19:56.031 [2024-07-15 10:36:44.429660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54cc0, cid 6, qid 0 00:19:56.031 [2024-07-15 10:36:44.429667] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54e40, cid 7, qid 0 00:19:56.031 [2024-07-15 10:36:44.433840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:56.031 [2024-07-15 10:36:44.433858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:56.031 [2024-07-15 10:36:44.433866] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:56.031 [2024-07-15 10:36:44.433872] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf4540): datao=0, datal=8192, cccid=5 00:19:56.031 [2024-07-15 10:36:44.433880] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d54b40) on tqpair(0x1cf4540): expected_datao=0, payload_size=8192 00:19:56.031 [2024-07-15 10:36:44.433887] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.433897] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.433908] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.433917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:56.032 [2024-07-15 10:36:44.433927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:56.032 [2024-07-15 10:36:44.433933] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.433939] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf4540): datao=0, datal=512, cccid=4 00:19:56.032 [2024-07-15 10:36:44.433947] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d549c0) on tqpair(0x1cf4540): expected_datao=0, payload_size=512 00:19:56.032 [2024-07-15 10:36:44.433954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.433963] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.433970] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.433979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:56.032 [2024-07-15 10:36:44.433988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:56.032 [2024-07-15 10:36:44.433994] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.434000] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf4540): datao=0, datal=512, cccid=6 00:19:56.032 [2024-07-15 10:36:44.434008] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d54cc0) on tqpair(0x1cf4540): expected_datao=0, payload_size=512 00:19:56.032 [2024-07-15 10:36:44.434015] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.434024] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.434032] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.434040] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:56.032 
[2024-07-15 10:36:44.434049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:56.032 [2024-07-15 10:36:44.434055] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.434062] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf4540): datao=0, datal=4096, cccid=7 00:19:56.032 [2024-07-15 10:36:44.434069] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d54e40) on tqpair(0x1cf4540): expected_datao=0, payload_size=4096 00:19:56.032 [2024-07-15 10:36:44.434076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.434085] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.434093] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.434101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.032 [2024-07-15 10:36:44.434125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.032 [2024-07-15 10:36:44.434132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.434138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54b40) on tqpair=0x1cf4540 00:19:56.032 [2024-07-15 10:36:44.434156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.032 [2024-07-15 10:36:44.434167] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.032 [2024-07-15 10:36:44.434174] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.434180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d549c0) on tqpair=0x1cf4540 00:19:56.032 [2024-07-15 10:36:44.434194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.032 [2024-07-15 10:36:44.434205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.032 [2024-07-15 10:36:44.434211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.434218] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54cc0) on tqpair=0x1cf4540 00:19:56.032 [2024-07-15 10:36:44.434227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.032 [2024-07-15 10:36:44.434237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.032 [2024-07-15 10:36:44.434246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.032 [2024-07-15 10:36:44.434253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54e40) on tqpair=0x1cf4540 00:19:56.032 ===================================================== 00:19:56.032 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:56.032 ===================================================== 00:19:56.032 Controller Capabilities/Features 00:19:56.032 ================================ 00:19:56.032 Vendor ID: 8086 00:19:56.032 Subsystem Vendor ID: 8086 00:19:56.032 Serial Number: SPDK00000000000001 00:19:56.032 Model Number: SPDK bdev Controller 00:19:56.032 Firmware Version: 24.09 00:19:56.032 Recommended Arb Burst: 6 00:19:56.032 IEEE OUI Identifier: e4 d2 5c 00:19:56.032 Multi-path I/O 00:19:56.032 May have multiple subsystem ports: Yes 00:19:56.032 May have multiple controllers: Yes 00:19:56.032 Associated with SR-IOV VF: No 00:19:56.032 Max Data Transfer Size: 131072 00:19:56.032 Max Number of Namespaces: 32 00:19:56.032 
Max Number of I/O Queues: 127 00:19:56.032 NVMe Specification Version (VS): 1.3 00:19:56.032 NVMe Specification Version (Identify): 1.3 00:19:56.032 Maximum Queue Entries: 128 00:19:56.032 Contiguous Queues Required: Yes 00:19:56.032 Arbitration Mechanisms Supported 00:19:56.032 Weighted Round Robin: Not Supported 00:19:56.032 Vendor Specific: Not Supported 00:19:56.032 Reset Timeout: 15000 ms 00:19:56.032 Doorbell Stride: 4 bytes 00:19:56.032 NVM Subsystem Reset: Not Supported 00:19:56.032 Command Sets Supported 00:19:56.032 NVM Command Set: Supported 00:19:56.032 Boot Partition: Not Supported 00:19:56.032 Memory Page Size Minimum: 4096 bytes 00:19:56.032 Memory Page Size Maximum: 4096 bytes 00:19:56.032 Persistent Memory Region: Not Supported 00:19:56.032 Optional Asynchronous Events Supported 00:19:56.032 Namespace Attribute Notices: Supported 00:19:56.032 Firmware Activation Notices: Not Supported 00:19:56.032 ANA Change Notices: Not Supported 00:19:56.032 PLE Aggregate Log Change Notices: Not Supported 00:19:56.032 LBA Status Info Alert Notices: Not Supported 00:19:56.032 EGE Aggregate Log Change Notices: Not Supported 00:19:56.032 Normal NVM Subsystem Shutdown event: Not Supported 00:19:56.032 Zone Descriptor Change Notices: Not Supported 00:19:56.032 Discovery Log Change Notices: Not Supported 00:19:56.032 Controller Attributes 00:19:56.032 128-bit Host Identifier: Supported 00:19:56.032 Non-Operational Permissive Mode: Not Supported 00:19:56.032 NVM Sets: Not Supported 00:19:56.032 Read Recovery Levels: Not Supported 00:19:56.032 Endurance Groups: Not Supported 00:19:56.032 Predictable Latency Mode: Not Supported 00:19:56.032 Traffic Based Keep ALive: Not Supported 00:19:56.032 Namespace Granularity: Not Supported 00:19:56.032 SQ Associations: Not Supported 00:19:56.032 UUID List: Not Supported 00:19:56.032 Multi-Domain Subsystem: Not Supported 00:19:56.032 Fixed Capacity Management: Not Supported 00:19:56.032 Variable Capacity Management: Not Supported 00:19:56.032 Delete Endurance Group: Not Supported 00:19:56.032 Delete NVM Set: Not Supported 00:19:56.032 Extended LBA Formats Supported: Not Supported 00:19:56.032 Flexible Data Placement Supported: Not Supported 00:19:56.032 00:19:56.032 Controller Memory Buffer Support 00:19:56.032 ================================ 00:19:56.032 Supported: No 00:19:56.032 00:19:56.032 Persistent Memory Region Support 00:19:56.032 ================================ 00:19:56.032 Supported: No 00:19:56.032 00:19:56.032 Admin Command Set Attributes 00:19:56.032 ============================ 00:19:56.032 Security Send/Receive: Not Supported 00:19:56.032 Format NVM: Not Supported 00:19:56.032 Firmware Activate/Download: Not Supported 00:19:56.032 Namespace Management: Not Supported 00:19:56.032 Device Self-Test: Not Supported 00:19:56.032 Directives: Not Supported 00:19:56.032 NVMe-MI: Not Supported 00:19:56.032 Virtualization Management: Not Supported 00:19:56.032 Doorbell Buffer Config: Not Supported 00:19:56.032 Get LBA Status Capability: Not Supported 00:19:56.032 Command & Feature Lockdown Capability: Not Supported 00:19:56.032 Abort Command Limit: 4 00:19:56.032 Async Event Request Limit: 4 00:19:56.032 Number of Firmware Slots: N/A 00:19:56.032 Firmware Slot 1 Read-Only: N/A 00:19:56.032 Firmware Activation Without Reset: N/A 00:19:56.032 Multiple Update Detection Support: N/A 00:19:56.032 Firmware Update Granularity: No Information Provided 00:19:56.032 Per-Namespace SMART Log: No 00:19:56.032 Asymmetric Namespace Access Log Page: Not Supported 
00:19:56.032 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:56.032 Command Effects Log Page: Supported 00:19:56.032 Get Log Page Extended Data: Supported 00:19:56.032 Telemetry Log Pages: Not Supported 00:19:56.032 Persistent Event Log Pages: Not Supported 00:19:56.032 Supported Log Pages Log Page: May Support 00:19:56.033 Commands Supported & Effects Log Page: Not Supported 00:19:56.033 Feature Identifiers & Effects Log Page:May Support 00:19:56.033 NVMe-MI Commands & Effects Log Page: May Support 00:19:56.033 Data Area 4 for Telemetry Log: Not Supported 00:19:56.033 Error Log Page Entries Supported: 128 00:19:56.033 Keep Alive: Supported 00:19:56.033 Keep Alive Granularity: 10000 ms 00:19:56.033 00:19:56.033 NVM Command Set Attributes 00:19:56.033 ========================== 00:19:56.033 Submission Queue Entry Size 00:19:56.033 Max: 64 00:19:56.033 Min: 64 00:19:56.033 Completion Queue Entry Size 00:19:56.033 Max: 16 00:19:56.033 Min: 16 00:19:56.033 Number of Namespaces: 32 00:19:56.033 Compare Command: Supported 00:19:56.033 Write Uncorrectable Command: Not Supported 00:19:56.033 Dataset Management Command: Supported 00:19:56.033 Write Zeroes Command: Supported 00:19:56.033 Set Features Save Field: Not Supported 00:19:56.033 Reservations: Supported 00:19:56.033 Timestamp: Not Supported 00:19:56.033 Copy: Supported 00:19:56.033 Volatile Write Cache: Present 00:19:56.033 Atomic Write Unit (Normal): 1 00:19:56.033 Atomic Write Unit (PFail): 1 00:19:56.033 Atomic Compare & Write Unit: 1 00:19:56.033 Fused Compare & Write: Supported 00:19:56.033 Scatter-Gather List 00:19:56.033 SGL Command Set: Supported 00:19:56.033 SGL Keyed: Supported 00:19:56.033 SGL Bit Bucket Descriptor: Not Supported 00:19:56.033 SGL Metadata Pointer: Not Supported 00:19:56.033 Oversized SGL: Not Supported 00:19:56.033 SGL Metadata Address: Not Supported 00:19:56.033 SGL Offset: Supported 00:19:56.033 Transport SGL Data Block: Not Supported 00:19:56.033 Replay Protected Memory Block: Not Supported 00:19:56.033 00:19:56.033 Firmware Slot Information 00:19:56.033 ========================= 00:19:56.033 Active slot: 1 00:19:56.033 Slot 1 Firmware Revision: 24.09 00:19:56.033 00:19:56.033 00:19:56.033 Commands Supported and Effects 00:19:56.033 ============================== 00:19:56.033 Admin Commands 00:19:56.033 -------------- 00:19:56.033 Get Log Page (02h): Supported 00:19:56.033 Identify (06h): Supported 00:19:56.033 Abort (08h): Supported 00:19:56.033 Set Features (09h): Supported 00:19:56.033 Get Features (0Ah): Supported 00:19:56.033 Asynchronous Event Request (0Ch): Supported 00:19:56.033 Keep Alive (18h): Supported 00:19:56.033 I/O Commands 00:19:56.033 ------------ 00:19:56.033 Flush (00h): Supported LBA-Change 00:19:56.033 Write (01h): Supported LBA-Change 00:19:56.033 Read (02h): Supported 00:19:56.033 Compare (05h): Supported 00:19:56.033 Write Zeroes (08h): Supported LBA-Change 00:19:56.033 Dataset Management (09h): Supported LBA-Change 00:19:56.033 Copy (19h): Supported LBA-Change 00:19:56.033 00:19:56.033 Error Log 00:19:56.033 ========= 00:19:56.033 00:19:56.033 Arbitration 00:19:56.033 =========== 00:19:56.033 Arbitration Burst: 1 00:19:56.033 00:19:56.033 Power Management 00:19:56.033 ================ 00:19:56.033 Number of Power States: 1 00:19:56.033 Current Power State: Power State #0 00:19:56.033 Power State #0: 00:19:56.033 Max Power: 0.00 W 00:19:56.033 Non-Operational State: Operational 00:19:56.033 Entry Latency: Not Reported 00:19:56.033 Exit Latency: Not Reported 00:19:56.033 Relative Read 
Throughput: 0 00:19:56.033 Relative Read Latency: 0 00:19:56.033 Relative Write Throughput: 0 00:19:56.033 Relative Write Latency: 0 00:19:56.033 Idle Power: Not Reported 00:19:56.033 Active Power: Not Reported 00:19:56.033 Non-Operational Permissive Mode: Not Supported 00:19:56.033 00:19:56.033 Health Information 00:19:56.033 ================== 00:19:56.033 Critical Warnings: 00:19:56.033 Available Spare Space: OK 00:19:56.033 Temperature: OK 00:19:56.033 Device Reliability: OK 00:19:56.033 Read Only: No 00:19:56.033 Volatile Memory Backup: OK 00:19:56.033 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:56.033 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:56.033 Available Spare: 0% 00:19:56.033 Available Spare Threshold: 0% 00:19:56.033 Life Percentage Used:[2024-07-15 10:36:44.434362] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.033 [2024-07-15 10:36:44.434374] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1cf4540) 00:19:56.033 [2024-07-15 10:36:44.434384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.033 [2024-07-15 10:36:44.434407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54e40, cid 7, qid 0 00:19:56.033 [2024-07-15 10:36:44.434543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.033 [2024-07-15 10:36:44.434558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.033 [2024-07-15 10:36:44.434566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.033 [2024-07-15 10:36:44.434573] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54e40) on tqpair=0x1cf4540 00:19:56.033 [2024-07-15 10:36:44.434628] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:56.033 [2024-07-15 10:36:44.434648] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d543c0) on tqpair=0x1cf4540 00:19:56.033 [2024-07-15 10:36:44.434659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.033 [2024-07-15 10:36:44.434668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54540) on tqpair=0x1cf4540 00:19:56.033 [2024-07-15 10:36:44.434677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.033 [2024-07-15 10:36:44.434685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d546c0) on tqpair=0x1cf4540 00:19:56.033 [2024-07-15 10:36:44.434693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.033 [2024-07-15 10:36:44.434702] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.033 [2024-07-15 10:36:44.434710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.033 [2024-07-15 10:36:44.434723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.033 [2024-07-15 10:36:44.434746] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.033 [2024-07-15 10:36:44.434752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.033 [2024-07-15 
10:36:44.434762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.033 [2024-07-15 10:36:44.434784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.033 [2024-07-15 10:36:44.434969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.033 [2024-07-15 10:36:44.434984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.033 [2024-07-15 10:36:44.434991] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.033 [2024-07-15 10:36:44.434998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.033 [2024-07-15 10:36:44.435009] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.033 [2024-07-15 10:36:44.435017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.033 [2024-07-15 10:36:44.435023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.033 [2024-07-15 10:36:44.435034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.033 [2024-07-15 10:36:44.435060] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.033 [2024-07-15 10:36:44.435152] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.033 [2024-07-15 10:36:44.435170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.033 [2024-07-15 10:36:44.435178] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.033 [2024-07-15 10:36:44.435185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.033 [2024-07-15 10:36:44.435193] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:56.033 [2024-07-15 10:36:44.435200] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:56.033 [2024-07-15 10:36:44.435217] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.033 [2024-07-15 10:36:44.435227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.033 [2024-07-15 10:36:44.435233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.033 [2024-07-15 10:36:44.435243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.033 [2024-07-15 10:36:44.435272] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.033 [2024-07-15 10:36:44.435349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.033 [2024-07-15 10:36:44.435369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.033 [2024-07-15 10:36:44.435376] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.033 [2024-07-15 10:36:44.435383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.034 [2024-07-15 10:36:44.435399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.435409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.435416] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.034 [2024-07-15 10:36:44.435427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.034 [2024-07-15 10:36:44.435448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.034 [2024-07-15 10:36:44.435527] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.034 [2024-07-15 10:36:44.435540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.034 [2024-07-15 10:36:44.435547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.435554] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.034 [2024-07-15 10:36:44.435570] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.435580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.435586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.034 [2024-07-15 10:36:44.435597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.034 [2024-07-15 10:36:44.435617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.034 [2024-07-15 10:36:44.435692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.034 [2024-07-15 10:36:44.435705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.034 [2024-07-15 10:36:44.435712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.435719] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.034 [2024-07-15 10:36:44.435735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.435745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.435751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.034 [2024-07-15 10:36:44.435762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.034 [2024-07-15 10:36:44.435786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.034 [2024-07-15 10:36:44.435878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.034 [2024-07-15 10:36:44.435892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.034 [2024-07-15 10:36:44.435900] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.435906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.034 [2024-07-15 10:36:44.435923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.435933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.435939] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.034 [2024-07-15 10:36:44.435950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.034 [2024-07-15 10:36:44.435971] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.034 [2024-07-15 10:36:44.436049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.034 [2024-07-15 10:36:44.436063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.034 [2024-07-15 10:36:44.436070] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436077] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.034 [2024-07-15 10:36:44.436093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.034 [2024-07-15 10:36:44.436120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.034 [2024-07-15 10:36:44.436140] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.034 [2024-07-15 10:36:44.436218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.034 [2024-07-15 10:36:44.436232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.034 [2024-07-15 10:36:44.436239] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.034 [2024-07-15 10:36:44.436262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.034 [2024-07-15 10:36:44.436289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.034 [2024-07-15 10:36:44.436309] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.034 [2024-07-15 10:36:44.436386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.034 [2024-07-15 10:36:44.436399] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.034 [2024-07-15 10:36:44.436406] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436413] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.034 [2024-07-15 10:36:44.436428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436444] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.034 [2024-07-15 10:36:44.436455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.034 [2024-07-15 10:36:44.436475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.034 [2024-07-15 
10:36:44.436554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.034 [2024-07-15 10:36:44.436568] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.034 [2024-07-15 10:36:44.436575] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.034 [2024-07-15 10:36:44.436599] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.034 [2024-07-15 10:36:44.436630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.034 [2024-07-15 10:36:44.436651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.034 [2024-07-15 10:36:44.436729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.034 [2024-07-15 10:36:44.436744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.034 [2024-07-15 10:36:44.436751] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436757] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.034 [2024-07-15 10:36:44.436774] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436790] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.034 [2024-07-15 10:36:44.436808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.034 [2024-07-15 10:36:44.436832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.034 [2024-07-15 10:36:44.436911] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.034 [2024-07-15 10:36:44.436925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.034 [2024-07-15 10:36:44.436932] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.034 [2024-07-15 10:36:44.436956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.436972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.034 [2024-07-15 10:36:44.436983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.034 [2024-07-15 10:36:44.437003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.034 [2024-07-15 10:36:44.437085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.034 [2024-07-15 10:36:44.437099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.034 
[2024-07-15 10:36:44.437106] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.437113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.034 [2024-07-15 10:36:44.437130] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.437140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.437146] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.034 [2024-07-15 10:36:44.437156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.034 [2024-07-15 10:36:44.437177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.034 [2024-07-15 10:36:44.437251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.034 [2024-07-15 10:36:44.437269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.034 [2024-07-15 10:36:44.437277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.437284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.034 [2024-07-15 10:36:44.437300] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.437311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.034 [2024-07-15 10:36:44.437317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.034 [2024-07-15 10:36:44.437328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.034 [2024-07-15 10:36:44.437348] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.034 [2024-07-15 10:36:44.437425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.034 [2024-07-15 10:36:44.437438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.034 [2024-07-15 10:36:44.437445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.035 [2024-07-15 10:36:44.437452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.035 [2024-07-15 10:36:44.437468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.035 [2024-07-15 10:36:44.437477] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.035 [2024-07-15 10:36:44.437484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.035 [2024-07-15 10:36:44.437494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.035 [2024-07-15 10:36:44.437515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.035 [2024-07-15 10:36:44.437599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.035 [2024-07-15 10:36:44.437613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.035 [2024-07-15 10:36:44.437621] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.035 [2024-07-15 10:36:44.437627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.035 [2024-07-15 10:36:44.437644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.035 [2024-07-15 10:36:44.437654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.035 [2024-07-15 10:36:44.437660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.035 [2024-07-15 10:36:44.437671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.035 [2024-07-15 10:36:44.437691] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.035 [2024-07-15 10:36:44.437767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.035 [2024-07-15 10:36:44.437779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.035 [2024-07-15 10:36:44.437786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.035 [2024-07-15 10:36:44.437793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.035 [2024-07-15 10:36:44.441820] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:56.035 [2024-07-15 10:36:44.441834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:56.035 [2024-07-15 10:36:44.441841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf4540) 00:19:56.035 [2024-07-15 10:36:44.441851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.035 [2024-07-15 10:36:44.441872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d54840, cid 3, qid 0 00:19:56.035 [2024-07-15 10:36:44.442056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:56.035 [2024-07-15 10:36:44.442071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:56.035 [2024-07-15 10:36:44.442082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:56.035 [2024-07-15 10:36:44.442090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d54840) on tqpair=0x1cf4540 00:19:56.035 [2024-07-15 10:36:44.442103] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:19:56.035 0% 00:19:56.035 Data Units Read: 0 00:19:56.035 Data Units Written: 0 00:19:56.035 Host Read Commands: 0 00:19:56.035 Host Write Commands: 0 00:19:56.035 Controller Busy Time: 0 minutes 00:19:56.035 Power Cycles: 0 00:19:56.035 Power On Hours: 0 hours 00:19:56.035 Unsafe Shutdowns: 0 00:19:56.035 Unrecoverable Media Errors: 0 00:19:56.035 Lifetime Error Log Entries: 0 00:19:56.035 Warning Temperature Time: 0 minutes 00:19:56.035 Critical Temperature Time: 0 minutes 00:19:56.035 00:19:56.035 Number of Queues 00:19:56.035 ================ 00:19:56.035 Number of I/O Submission Queues: 127 00:19:56.035 Number of I/O Completion Queues: 127 00:19:56.035 00:19:56.035 Active Namespaces 00:19:56.035 ================= 00:19:56.035 Namespace ID:1 00:19:56.035 Error Recovery Timeout: Unlimited 00:19:56.035 Command Set Identifier: NVM (00h) 00:19:56.035 Deallocate: Supported 00:19:56.035 Deallocated/Unwritten Error: Not Supported 00:19:56.035 Deallocated Read Value: Unknown 00:19:56.035 Deallocate in Write Zeroes: Not Supported 00:19:56.035 Deallocated Guard Field: 0xFFFF 00:19:56.035 Flush: Supported 
00:19:56.035 Reservation: Supported 00:19:56.035 Namespace Sharing Capabilities: Multiple Controllers 00:19:56.035 Size (in LBAs): 131072 (0GiB) 00:19:56.035 Capacity (in LBAs): 131072 (0GiB) 00:19:56.035 Utilization (in LBAs): 131072 (0GiB) 00:19:56.035 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:56.035 EUI64: ABCDEF0123456789 00:19:56.035 UUID: e30bc2b4-a35b-4993-91a3-e840148f6241 00:19:56.035 Thin Provisioning: Not Supported 00:19:56.035 Per-NS Atomic Units: Yes 00:19:56.035 Atomic Boundary Size (Normal): 0 00:19:56.035 Atomic Boundary Size (PFail): 0 00:19:56.035 Atomic Boundary Offset: 0 00:19:56.035 Maximum Single Source Range Length: 65535 00:19:56.035 Maximum Copy Length: 65535 00:19:56.035 Maximum Source Range Count: 1 00:19:56.035 NGUID/EUI64 Never Reused: No 00:19:56.035 Namespace Write Protected: No 00:19:56.035 Number of LBA Formats: 1 00:19:56.035 Current LBA Format: LBA Format #00 00:19:56.035 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:56.035 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:56.035 rmmod nvme_tcp 00:19:56.035 rmmod nvme_fabrics 00:19:56.035 rmmod nvme_keyring 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1253766 ']' 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1253766 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1253766 ']' 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1253766 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1253766 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
1253766' 00:19:56.035 killing process with pid 1253766 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1253766 00:19:56.035 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1253766 00:19:56.294 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:56.294 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:56.294 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:56.294 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:56.294 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:56.294 10:36:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.294 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.294 10:36:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.826 10:36:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:58.826 00:19:58.826 real 0m5.537s 00:19:58.827 user 0m4.352s 00:19:58.827 sys 0m1.972s 00:19:58.827 10:36:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:58.827 10:36:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.827 ************************************ 00:19:58.827 END TEST nvmf_identify 00:19:58.827 ************************************ 00:19:58.827 10:36:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:58.827 10:36:46 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:58.827 10:36:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:58.827 10:36:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.827 10:36:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:58.827 ************************************ 00:19:58.827 START TEST nvmf_perf 00:19:58.827 ************************************ 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:58.827 * Looking for test storage... 
00:19:58.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.827 10:36:46 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:19:58.827 10:36:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:00.730 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:00.730 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:00.730 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:00.731 Found net devices under 0000:09:00.0: cvl_0_0 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:00.731 Found net devices under 0000:09:00.1: cvl_0_1 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:00.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:20:00.731 00:20:00.731 --- 10.0.0.2 ping statistics --- 00:20:00.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.731 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:00.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:20:00.731 00:20:00.731 --- 10.0.0.1 ping statistics --- 00:20:00.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.731 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1255843 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1255843 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1255843 ']' 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.731 10:36:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:01.033 [2024-07-15 10:36:49.321575] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:01.033 [2024-07-15 10:36:49.321657] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.033 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.033 [2024-07-15 10:36:49.381930] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.033 [2024-07-15 10:36:49.487483] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.033 [2024-07-15 10:36:49.487535] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:01.033 [2024-07-15 10:36:49.487556] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.033 [2024-07-15 10:36:49.487594] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.033 [2024-07-15 10:36:49.487609] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.033 [2024-07-15 10:36:49.487708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.034 [2024-07-15 10:36:49.487843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.034 [2024-07-15 10:36:49.487903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.034 [2024-07-15 10:36:49.487910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.317 10:36:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.317 10:36:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:01.317 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.317 10:36:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.317 10:36:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:01.317 10:36:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.317 10:36:49 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:01.317 10:36:49 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:04.594 10:36:52 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:04.594 10:36:52 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:04.594 10:36:52 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:20:04.594 10:36:52 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:04.854 10:36:53 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:04.854 10:36:53 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:20:04.854 10:36:53 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:04.854 10:36:53 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:04.854 10:36:53 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:05.112 [2024-07-15 10:36:53.446651] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.112 10:36:53 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:05.370 10:36:53 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:05.370 10:36:53 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:05.627 10:36:53 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:05.627 10:36:53 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:05.884 10:36:54 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.142 [2024-07-15 10:36:54.438195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.142 10:36:54 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:06.400 10:36:54 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:20:06.400 10:36:54 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:20:06.400 10:36:54 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:06.400 10:36:54 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:20:07.771 Initializing NVMe Controllers 00:20:07.771 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:20:07.771 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:20:07.771 Initialization complete. Launching workers. 00:20:07.771 ======================================================== 00:20:07.771 Latency(us) 00:20:07.771 Device Information : IOPS MiB/s Average min max 00:20:07.771 PCIE (0000:0b:00.0) NSID 1 from core 0: 85349.24 333.40 374.46 38.59 4312.43 00:20:07.771 ======================================================== 00:20:07.771 Total : 85349.24 333.40 374.46 38.59 4312.43 00:20:07.771 00:20:07.771 10:36:55 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:07.771 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.144 Initializing NVMe Controllers 00:20:09.144 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:09.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:09.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:09.144 Initialization complete. Launching workers. 
00:20:09.144 ======================================================== 00:20:09.144 Latency(us) 00:20:09.144 Device Information : IOPS MiB/s Average min max 00:20:09.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 131.93 0.52 7587.41 139.56 45783.17 00:20:09.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 71.96 0.28 14117.32 4964.13 47906.81 00:20:09.144 ======================================================== 00:20:09.144 Total : 203.90 0.80 9892.09 139.56 47906.81 00:20:09.144 00:20:09.144 10:36:57 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:09.144 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.517 Initializing NVMe Controllers 00:20:10.517 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:10.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:10.517 Initialization complete. Launching workers. 00:20:10.517 ======================================================== 00:20:10.517 Latency(us) 00:20:10.517 Device Information : IOPS MiB/s Average min max 00:20:10.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8414.99 32.87 3809.81 677.47 7587.85 00:20:10.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3845.00 15.02 8358.41 5957.93 15816.02 00:20:10.517 ======================================================== 00:20:10.517 Total : 12259.99 47.89 5236.35 677.47 15816.02 00:20:10.517 00:20:10.517 10:36:58 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:10.517 10:36:58 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:10.517 10:36:58 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:10.517 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.046 Initializing NVMe Controllers 00:20:13.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:13.046 Controller IO queue size 128, less than required. 00:20:13.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:13.046 Controller IO queue size 128, less than required. 00:20:13.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:13.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:13.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:13.046 Initialization complete. Launching workers. 
00:20:13.046 ======================================================== 00:20:13.046 Latency(us) 00:20:13.046 Device Information : IOPS MiB/s Average min max 00:20:13.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1701.68 425.42 76619.45 50827.17 135621.12 00:20:13.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 585.55 146.39 224623.43 76142.25 341816.05 00:20:13.046 ======================================================== 00:20:13.046 Total : 2287.23 571.81 114509.50 50827.17 341816.05 00:20:13.046 00:20:13.046 10:37:01 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:13.046 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.046 No valid NVMe controllers or AIO or URING devices found 00:20:13.046 Initializing NVMe Controllers 00:20:13.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:13.046 Controller IO queue size 128, less than required. 00:20:13.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:13.046 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:13.046 Controller IO queue size 128, less than required. 00:20:13.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:13.046 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:13.046 WARNING: Some requested NVMe devices were skipped 00:20:13.046 10:37:01 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:13.046 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.566 Initializing NVMe Controllers 00:20:15.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.566 Controller IO queue size 128, less than required. 00:20:15.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:15.566 Controller IO queue size 128, less than required. 00:20:15.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:15.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:15.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:15.566 Initialization complete. Launching workers. 
00:20:15.566 00:20:15.566 ==================== 00:20:15.567 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:15.567 TCP transport: 00:20:15.567 polls: 8953 00:20:15.567 idle_polls: 5585 00:20:15.567 sock_completions: 3368 00:20:15.567 nvme_completions: 6167 00:20:15.567 submitted_requests: 9204 00:20:15.567 queued_requests: 1 00:20:15.567 00:20:15.567 ==================== 00:20:15.567 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:15.567 TCP transport: 00:20:15.567 polls: 12107 00:20:15.567 idle_polls: 8856 00:20:15.567 sock_completions: 3251 00:20:15.567 nvme_completions: 6149 00:20:15.567 submitted_requests: 9216 00:20:15.567 queued_requests: 1 00:20:15.567 ======================================================== 00:20:15.567 Latency(us) 00:20:15.567 Device Information : IOPS MiB/s Average min max 00:20:15.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1540.62 385.15 84646.13 58542.55 126631.41 00:20:15.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1536.12 384.03 84448.44 41075.45 127833.94 00:20:15.567 ======================================================== 00:20:15.567 Total : 3076.74 769.18 84547.43 41075.45 127833.94 00:20:15.567 00:20:15.567 10:37:03 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:15.567 10:37:03 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.823 rmmod nvme_tcp 00:20:15.823 rmmod nvme_fabrics 00:20:15.823 rmmod nvme_keyring 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1255843 ']' 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1255843 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1255843 ']' 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1255843 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1255843 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1255843' 00:20:15.823 killing process with pid 1255843 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1255843 00:20:15.823 10:37:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1255843 00:20:17.719 10:37:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:17.719 10:37:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:17.719 10:37:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:17.719 10:37:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.719 10:37:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:17.719 10:37:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.719 10:37:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.719 10:37:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.624 10:37:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:19.624 00:20:19.624 real 0m21.020s 00:20:19.624 user 1m4.441s 00:20:19.624 sys 0m5.264s 00:20:19.624 10:37:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:19.624 10:37:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:19.624 ************************************ 00:20:19.624 END TEST nvmf_perf 00:20:19.624 ************************************ 00:20:19.624 10:37:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:19.624 10:37:07 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:19.624 10:37:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:19.624 10:37:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.624 10:37:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:19.624 ************************************ 00:20:19.624 START TEST nvmf_fio_host 00:20:19.624 ************************************ 00:20:19.624 10:37:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:19.624 * Looking for test storage... 
00:20:19.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:19.624 10:37:08 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:21.526 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.526 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:21.526 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:21.785 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:21.785 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.785 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:21.786 Found net devices under 0000:09:00.0: cvl_0_0 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:21.786 Found net devices under 0000:09:00.1: cvl_0_1 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
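The discovery above found the two E810 ports as cvl_0_0 and cvl_0_1; the lines that follow move the target-side port into its own network namespace so that the target (10.0.0.2) and the initiator (10.0.0.1) exchange real NVMe/TCP traffic on one machine. Condensed into one place, the plumbing logged here amounts to the following, with interface names and addresses exactly as used in this run:

    # target side: isolate cvl_0_0 in a namespace and assign 10.0.0.2
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator side: cvl_0_1 stays in the root namespace as 10.0.0.1
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # let NVMe/TCP traffic reach the default port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify both directions before the target starts
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1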
00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:21.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:20:21.786 00:20:21.786 --- 10.0.0.2 ping statistics --- 00:20:21.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.786 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:20:21.786 00:20:21.786 --- 10.0.0.1 ping statistics --- 00:20:21.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.786 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1259688 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1259688 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1259688 ']' 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.786 10:37:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.786 [2024-07-15 10:37:10.285575] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:21.786 [2024-07-15 10:37:10.285652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.786 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.044 [2024-07-15 10:37:10.349547] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.045 [2024-07-15 10:37:10.457040] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:22.045 [2024-07-15 10:37:10.457102] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.045 [2024-07-15 10:37:10.457116] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.045 [2024-07-15 10:37:10.457126] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.045 [2024-07-15 10:37:10.457135] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.045 [2024-07-15 10:37:10.457267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.045 [2024-07-15 10:37:10.457327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.045 [2024-07-15 10:37:10.457403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.045 [2024-07-15 10:37:10.457406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.045 10:37:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.045 10:37:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:20:22.045 10:37:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:22.609 [2024-07-15 10:37:10.860394] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.609 10:37:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:22.609 10:37:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:22.609 10:37:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.609 10:37:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:22.867 Malloc1 00:20:22.867 10:37:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:23.125 10:37:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:23.383 10:37:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.641 [2024-07-15 10:37:12.064313] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.641 10:37:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:23.899 10:37:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:24.159 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:24.159 fio-3.35 00:20:24.159 Starting 1 thread 00:20:24.159 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.691 [2024-07-15 10:37:14.887559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061470 is same with the state(5) to be set 00:20:26.691 [2024-07-15 10:37:14.887623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061470 is same with the state(5) to be set 00:20:26.691 [2024-07-15 10:37:14.887638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061470 is same with the state(5) to be set 00:20:26.691 [2024-07-15 10:37:14.887651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061470 is same with the state(5) to be set 00:20:26.691 [2024-07-15 10:37:14.887663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061470 is same with the state(5) to be set 
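The target for this fio pass was provisioned entirely over JSON-RPC, as logged just above: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and a subsystem with one namespace plus data and discovery listeners on 10.0.0.2:4420. Collected into a single sketch (the rpc.py path is shortened into a variable; the transport options are passed exactly as the harness passes them):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport, with -o and -u 8192 as used by the harness
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev, 512-byte blocks, to back the namespace
    $rpc bdev_malloc_create 64 512 -b Malloc1
    # subsystem with any-host access, one namespace, data + discovery listeners
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # teardown once the fio passes finish
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The nvmf_perf run earlier in this log tears its subsystem down with the same nvmf_delete_subsystem call.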
00:20:26.691 00:20:26.691 test: (groupid=0, jobs=1): err= 0: pid=1260045: Mon Jul 15 10:37:14 2024 00:20:26.691 read: IOPS=9168, BW=35.8MiB/s (37.6MB/s)(71.8MiB/2006msec) 00:20:26.691 slat (nsec): min=1918, max=157726, avg=2499.36, stdev=1809.32 00:20:26.691 clat (usec): min=2541, max=12731, avg=7669.05, stdev=591.24 00:20:26.691 lat (usec): min=2569, max=12734, avg=7671.55, stdev=591.14 00:20:26.691 clat percentiles (usec): 00:20:26.691 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7242], 00:20:26.691 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:20:26.691 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8356], 95.00th=[ 8586], 00:20:26.691 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11731], 99.95th=[12256], 00:20:26.691 | 99.99th=[12649] 00:20:26.691 bw ( KiB/s): min=35768, max=37144, per=99.91%, avg=36642.00, stdev=602.90, samples=4 00:20:26.691 iops : min= 8942, max= 9286, avg=9160.50, stdev=150.73, samples=4 00:20:26.691 write: IOPS=9177, BW=35.9MiB/s (37.6MB/s)(71.9MiB/2006msec); 0 zone resets 00:20:26.691 slat (usec): min=2, max=127, avg= 2.67, stdev= 1.31 00:20:26.691 clat (usec): min=1354, max=11718, avg=6241.18, stdev=499.23 00:20:26.691 lat (usec): min=1363, max=11721, avg=6243.86, stdev=499.18 00:20:26.691 clat percentiles (usec): 00:20:26.691 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:20:26.691 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6325], 00:20:26.691 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6980], 00:20:26.691 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 9765], 99.95th=[10814], 00:20:26.691 | 99.99th=[11600] 00:20:26.691 bw ( KiB/s): min=36536, max=37048, per=99.99%, avg=36710.00, stdev=233.36, samples=4 00:20:26.691 iops : min= 9134, max= 9262, avg=9177.50, stdev=58.34, samples=4 00:20:26.691 lat (msec) : 2=0.02%, 4=0.12%, 10=99.74%, 20=0.12% 00:20:26.691 cpu : usr=64.29%, sys=33.87%, ctx=100, majf=0, minf=39 00:20:26.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:26.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:26.691 issued rwts: total=18392,18411,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:26.691 00:20:26.691 Run status group 0 (all jobs): 00:20:26.691 READ: bw=35.8MiB/s (37.6MB/s), 35.8MiB/s-35.8MiB/s (37.6MB/s-37.6MB/s), io=71.8MiB (75.3MB), run=2006-2006msec 00:20:26.691 WRITE: bw=35.9MiB/s (37.6MB/s), 35.9MiB/s-35.9MiB/s (37.6MB/s-37.6MB/s), io=71.9MiB (75.4MB), run=2006-2006msec 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- 
# local sanitizers 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:26.691 10:37:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:26.691 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:26.691 fio-3.35 00:20:26.691 Starting 1 thread 00:20:26.691 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.256 00:20:29.256 test: (groupid=0, jobs=1): err= 0: pid=1260494: Mon Jul 15 10:37:17 2024 00:20:29.256 read: IOPS=8537, BW=133MiB/s (140MB/s)(267MiB/2001msec) 00:20:29.256 slat (usec): min=2, max=125, avg= 3.78, stdev= 1.80 00:20:29.256 clat (usec): min=333, max=18744, avg=8445.45, stdev=1971.34 00:20:29.256 lat (usec): min=341, max=18747, avg=8449.23, stdev=1971.37 00:20:29.256 clat percentiles (usec): 00:20:29.256 | 1.00th=[ 4555], 5.00th=[ 5407], 10.00th=[ 6063], 20.00th=[ 6783], 00:20:29.256 | 30.00th=[ 7308], 40.00th=[ 7832], 50.00th=[ 8356], 60.00th=[ 8848], 00:20:29.256 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[10945], 95.00th=[11731], 00:20:29.256 | 99.00th=[13960], 99.50th=[15270], 99.90th=[15795], 99.95th=[16188], 00:20:29.256 | 99.99th=[16319] 00:20:29.256 bw ( KiB/s): min=62304, max=77088, per=50.09%, avg=68426.67, stdev=7712.02, samples=3 00:20:29.256 iops : min= 3894, max= 4818, avg=4276.67, stdev=482.00, samples=3 00:20:29.256 write: IOPS=5048, BW=78.9MiB/s (82.7MB/s)(146MiB/1847msec); 0 zone resets 00:20:29.256 slat (usec): min=30, max=136, avg=33.79, stdev= 4.96 00:20:29.256 clat (usec): min=5433, max=20263, avg=11238.09, 
stdev=1937.57 00:20:29.256 lat (usec): min=5467, max=20296, avg=11271.88, stdev=1937.45 00:20:29.256 clat percentiles (usec): 00:20:29.256 | 1.00th=[ 7570], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9634], 00:20:29.256 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11600], 00:20:29.256 | 70.00th=[12125], 80.00th=[12911], 90.00th=[13960], 95.00th=[14746], 00:20:29.256 | 99.00th=[16057], 99.50th=[16450], 99.90th=[19792], 99.95th=[20055], 00:20:29.256 | 99.99th=[20317] 00:20:29.256 bw ( KiB/s): min=64448, max=80288, per=88.41%, avg=71413.33, stdev=8090.77, samples=3 00:20:29.256 iops : min= 4028, max= 5018, avg=4463.33, stdev=505.67, samples=3 00:20:29.256 lat (usec) : 500=0.01% 00:20:29.256 lat (msec) : 4=0.24%, 10=61.26%, 20=38.48%, 50=0.02% 00:20:29.256 cpu : usr=78.66%, sys=20.09%, ctx=38, majf=0, minf=59 00:20:29.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:20:29.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:29.256 issued rwts: total=17084,9324,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:29.256 00:20:29.256 Run status group 0 (all jobs): 00:20:29.256 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=267MiB (280MB), run=2001-2001msec 00:20:29.257 WRITE: bw=78.9MiB/s (82.7MB/s), 78.9MiB/s-78.9MiB/s (82.7MB/s-82.7MB/s), io=146MiB (153MB), run=1847-1847msec 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:29.257 rmmod nvme_tcp 00:20:29.257 rmmod nvme_fabrics 00:20:29.257 rmmod nvme_keyring 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1259688 ']' 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1259688 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1259688 ']' 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1259688 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:29.257 10:37:17 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1259688 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1259688' 00:20:29.257 killing process with pid 1259688 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1259688 00:20:29.257 10:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1259688 00:20:29.515 10:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:29.515 10:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:29.515 10:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:29.515 10:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.515 10:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:29.515 10:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.515 10:37:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.515 10:37:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.054 10:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:32.054 00:20:32.054 real 0m12.078s 00:20:32.054 user 0m35.826s 00:20:32.054 sys 0m3.881s 00:20:32.054 10:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:32.054 10:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.054 ************************************ 00:20:32.054 END TEST nvmf_fio_host 00:20:32.054 ************************************ 00:20:32.054 10:37:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:32.054 10:37:20 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:32.054 10:37:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:32.054 10:37:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:32.054 10:37:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:32.054 ************************************ 00:20:32.054 START TEST nvmf_failover 00:20:32.054 ************************************ 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:32.054 * Looking for test storage... 
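Both fio passes in the nvmf_fio_host test above drive the target through SPDK's fio plugin rather than the kernel initiator: fio runs with build/fio/spdk_nvme preloaded, ioengine=spdk, and a key=value --filename string naming the TCP controller instead of a block device path. The job files themselves (example_config.fio and mock_sgl_config.fio) are not printed in the log, so the command below is an illustrative command-line equivalent of the first run, reconstructed from the parameters fio reported (randrw, 4 KiB, iodepth 128, one thread, roughly 2 s of I/O):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio --name=test --ioengine=spdk --thread=1 \
        --rw=randrw --bs=4096 --iodepth=128 --numjobs=1 \
        --time_based=1 --runtime=2 \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'

As a sanity check, the reported rates are self-consistent: the first run's read side does 9168 IOPS at 4 KiB, and 9168 x 4096 B ~ 37.6 MB/s (35.8 MiB/s), matching the READ summary; the second run uses 16 KiB requests, and 8537 x 16 KiB ~ 133 MiB/s (140 MB/s), again matching.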
00:20:32.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:20:32.054 10:37:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:33.958 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:33.958 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:20:33.958 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:33.958 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:33.958 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:33.958 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:33.958 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:33.958 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:20:33.958 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:33.958 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:33.959 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:33.959 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:33.959 Found net devices under 0000:09:00.0: cvl_0_0 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:33.959 Found net devices under 0000:09:00.1: cvl_0_1 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:33.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:33.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:20:33.959 00:20:33.959 --- 10.0.0.2 ping statistics --- 00:20:33.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.959 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:33.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:33.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:20:33.959 00:20:33.959 --- 10.0.0.1 ping statistics --- 00:20:33.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.959 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1262698 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1262698 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1262698 ']' 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.959 10:37:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:33.959 [2024-07-15 10:37:22.440502] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:33.959 [2024-07-15 10:37:22.440589] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.959 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.959 [2024-07-15 10:37:22.503015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:34.217 [2024-07-15 10:37:22.610211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.217 [2024-07-15 10:37:22.610257] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.217 [2024-07-15 10:37:22.610286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.217 [2024-07-15 10:37:22.610297] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.217 [2024-07-15 10:37:22.610307] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.217 [2024-07-15 10:37:22.610386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.217 [2024-07-15 10:37:22.610452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.217 [2024-07-15 10:37:22.610455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.217 10:37:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.217 10:37:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:20:34.217 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:34.217 10:37:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:34.217 10:37:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:34.217 10:37:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.217 10:37:22 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:34.475 [2024-07-15 10:37:23.018905] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.732 10:37:23 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:34.988 Malloc0 00:20:34.988 10:37:23 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:35.244 10:37:23 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:35.501 10:37:23 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:35.809 [2024-07-15 10:37:24.115614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.809 10:37:24 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:36.065 [2024-07-15 
10:37:24.364328] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:36.065 10:37:24 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:36.065 [2024-07-15 10:37:24.609164] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:36.323 10:37:24 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1262984 00:20:36.323 10:37:24 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:36.323 10:37:24 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.323 10:37:24 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1262984 /var/tmp/bdevperf.sock 00:20:36.323 10:37:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1262984 ']' 00:20:36.323 10:37:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.323 10:37:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.323 10:37:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.323 10:37:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.323 10:37:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:36.582 10:37:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.582 10:37:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:20:36.582 10:37:24 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:36.839 NVMe0n1 00:20:36.839 10:37:25 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:37.403 00:20:37.403 10:37:25 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1263118 00:20:37.403 10:37:25 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:37.403 10:37:25 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:38.338 10:37:26 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:38.598 [2024-07-15 10:37:26.926014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeba070 is same with the state(5) to be set 00:20:38.598 [2024-07-15 10:37:26.926104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xeba070 is same with the state(5) to be set 00:20:38.598 [2024-07-15 10:37:26.926120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeba070 is same with the state(5) to be set 00:20:38.598 [2024-07-15 10:37:26.926132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeba070 is same with the state(5) to be set 00:20:38.598 [2024-07-15 10:37:26.926143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeba070 is same with the state(5) to be set 00:20:38.598 [2024-07-15 10:37:26.926155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeba070 is same with the state(5) to be set 00:20:38.598 [2024-07-15 10:37:26.926167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeba070 is same with the state(5) to be set 00:20:38.598 [2024-07-15 10:37:26.926178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeba070 is same with the state(5) to be set 00:20:38.598 [2024-07-15 10:37:26.926189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeba070 is same with the state(5) to be set 00:20:38.598 10:37:26 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:41.884 10:37:29 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:41.884 00:20:41.884 10:37:30 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:42.143 10:37:30 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:45.430 10:37:33 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:45.430 [2024-07-15 10:37:33.830558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.430 10:37:33 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:46.365 10:37:34 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:46.625 [2024-07-15 10:37:35.087769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.625 [2024-07-15 10:37:35.087866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.625 [2024-07-15 10:37:35.087882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.625 [2024-07-15 10:37:35.087894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.625 [2024-07-15 10:37:35.087907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.625 [2024-07-15 10:37:35.087919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.625 [2024-07-15 10:37:35.087931] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.625 [2024-07-15 10:37:35.087943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.625 [2024-07-15 10:37:35.087955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.625 [2024-07-15 10:37:35.087966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.625 [2024-07-15 10:37:35.087978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.625 [2024-07-15 10:37:35.087990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 
00:20:46.626 [2024-07-15 10:37:35.088240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 [2024-07-15 10:37:35.088451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe70 is same with the state(5) to be set 00:20:46.626 10:37:35 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1263118 00:20:53.208 0 00:20:53.208 10:37:40 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1262984 00:20:53.208 10:37:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1262984 ']' 00:20:53.209 10:37:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1262984 00:20:53.209 10:37:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:20:53.209 10:37:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:53.209 10:37:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1262984 00:20:53.209 10:37:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:53.209 10:37:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:53.209 10:37:40 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1262984' 00:20:53.209 killing process with pid 1262984 00:20:53.209 10:37:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1262984 00:20:53.209 10:37:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1262984 00:20:53.209 10:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:53.209 [2024-07-15 10:37:24.670344] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:53.209 [2024-07-15 10:37:24.670437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262984 ] 00:20:53.209 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.209 [2024-07-15 10:37:24.730735] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.209 [2024-07-15 10:37:24.838996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.209 Running I/O for 15 seconds... 00:20:53.209 [2024-07-15 10:37:26.926678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.926727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.926767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.926794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.926844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.926883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.926913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.926936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.926964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.926988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927106] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.927952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.927980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.928008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.928031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.928058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.928081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.928120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.928142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.928166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.928188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.928212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.928237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.928260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.928284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.928308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.928332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.928356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.928381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.928405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.928429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.928454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.928477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.928503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.928526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.928566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.928589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.209 [2024-07-15 10:37:26.928621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.209 [2024-07-15 10:37:26.928645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.928674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.928697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.928724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.928747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.928775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.928822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.928852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.928875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.928902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.928927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.928953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.928978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 
[2024-07-15 10:37:26.929227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.929972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.929996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930282] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.210 [2024-07-15 10:37:26.930909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.210 [2024-07-15 10:37:26.930932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.930958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.211 [2024-07-15 10:37:26.930981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.211 [2024-07-15 10:37:26.931031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.211 [2024-07-15 10:37:26.931082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.211 [2024-07-15 10:37:26.931152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.211 [2024-07-15 10:37:26.931204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.211 [2024-07-15 10:37:26.931253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.211 [2024-07-15 10:37:26.931307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:53.211 [2024-07-15 10:37:26.931356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.211 [2024-07-15 10:37:26.931405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.211 [2024-07-15 10:37:26.931454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.211 [2024-07-15 10:37:26.931503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.211 [2024-07-15 10:37:26.931550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.211 [2024-07-15 10:37:26.931598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.211 [2024-07-15 10:37:26.931645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.931696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.931747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.931795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 
10:37:26.931871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.931920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.931950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.931981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.932952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.932976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.933004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.933027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.933054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.933077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.933104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.933148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.211 [2024-07-15 10:37:26.933179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.211 [2024-07-15 10:37:26.933203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:26.933228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.212 [2024-07-15 10:37:26.933252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:26.933277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.212 [2024-07-15 10:37:26.933305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:26.933330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.212 [2024-07-15 10:37:26.933354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:26.933378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.212 [2024-07-15 10:37:26.933403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:26.933446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.212 [2024-07-15 10:37:26.933468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.212 [2024-07-15 10:37:26.933487] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79808 len:8 PRP1 0x0 PRP2 0x0 00:20:53.212 [2024-07-15 10:37:26.933510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:26.933590] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x253f390 was disconnected and freed. reset controller. 00:20:53.212 [2024-07-15 10:37:26.933617] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:20:53.212 [2024-07-15 10:37:26.933679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.212 [2024-07-15 10:37:26.933706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:26.933730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.212 [2024-07-15 10:37:26.933761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:26.933786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.212 [2024-07-15 10:37:26.933820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:26.933844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.212 [2024-07-15 10:37:26.933877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:26.933899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:53.212 [2024-07-15 10:37:26.933960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25190f0 (9): Bad file descriptor 00:20:53.212 [2024-07-15 10:37:26.938228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:53.212 [2024-07-15 10:37:26.974009] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:53.212 [2024-07-15 10:37:30.566211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.212 [2024-07-15 10:37:30.566274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.566315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.212 [2024-07-15 10:37:30.566341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.566385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.212 [2024-07-15 10:37:30.566409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.566436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.566474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.566499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.566521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.566546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.566568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.566594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.566616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.566639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.566661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.566686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.566711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.566738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.566761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.566807] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.566834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.566873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.566899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.566925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.566950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.566976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.567001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.567058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.567125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.567190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.567240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.567288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.567336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.567383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.567431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.567479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.567528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.212 [2024-07-15 10:37:30.567576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.212 [2024-07-15 10:37:30.567643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.212 [2024-07-15 10:37:30.567694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.212 [2024-07-15 10:37:30.567752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.212 [2024-07-15 10:37:30.567808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.212 [2024-07-15 10:37:30.567878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.212 [2024-07-15 10:37:30.567906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:104 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.213 [2024-07-15 10:37:30.567930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.567957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.213 [2024-07-15 10:37:30.567981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.213 [2024-07-15 10:37:30.568033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79064 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.568941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.568968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:53.213 [2024-07-15 10:37:30.568992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.569019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.569043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.569069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.569099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.569139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.569163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.569189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.569212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.569236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.569259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.569284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.213 [2024-07-15 10:37:30.569309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.569334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.569358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.569383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.569407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.569432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.569456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.569482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.569505] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.569531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.569554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.569580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.569602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.569629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.569651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.213 [2024-07-15 10:37:30.569678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.213 [2024-07-15 10:37:30.569702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.569733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.569756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.569782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.569826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.569856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.569880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.569906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.569931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.569957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.569981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.570948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.570973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.571025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.571083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.571146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.571196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.571245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.571294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.571344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.571392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.214 [2024-07-15 10:37:30.571448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.214 [2024-07-15 10:37:30.571497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.214 [2024-07-15 10:37:30.571546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.214 [2024-07-15 10:37:30.571595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 
[2024-07-15 10:37:30.571619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.214 [2024-07-15 10:37:30.571643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.214 [2024-07-15 10:37:30.571696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.214 [2024-07-15 10:37:30.571747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.571817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.571871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.571921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.214 [2024-07-15 10:37:30.571949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.214 [2024-07-15 10:37:30.571973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:30.572969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.572994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e3d80 is same with the state(5) to be set 00:20:53.215 [2024-07-15 10:37:30.573024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.215 [2024-07-15 10:37:30.573049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.215 [2024-07-15 10:37:30.573069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79696 len:8 PRP1 0x0 PRP2 0x0 00:20:53.215 [2024-07-15 10:37:30.573090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.573196] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26e3d80 was disconnected and freed. reset controller. 
00:20:53.215 [2024-07-15 10:37:30.573223] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:20:53.215 [2024-07-15 10:37:30.573287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.215 [2024-07-15 10:37:30.573315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.573339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.215 [2024-07-15 10:37:30.573365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.573388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.215 [2024-07-15 10:37:30.573412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.573435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.215 [2024-07-15 10:37:30.573457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:30.573481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:53.215 [2024-07-15 10:37:30.573553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25190f0 (9): Bad file descriptor 00:20:53.215 [2024-07-15 10:37:30.577773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:53.215 [2024-07-15 10:37:30.744489] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
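The notices above trace the bdev_nvme failover path under test: queued I/O on the 10.0.0.2:4421 qpair is aborted with SQ DELETION, the controller (nqn.2016-06.io.spdk:cnode1) is disconnected, and the reset completes against 10.0.0.2:4422. A minimal sketch of driving that same path by hand, assuming a running nvmf_tgt plus a host-side SPDK app, SPDK's scripts/rpc.py, and placeholder bdev/serial names (Malloc0, SPDK00000000000001); the -x failover option of bdev_nvme_attach_controller is assumed available in this build:

  # Target side: one subsystem, two TCP listeners (primary and failover port).
  rpc.py nvmf_create_transport -t tcp
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # Host side (against the host app's RPC socket): register both trids on one
  # controller so bdev_nvme can fail over between them.
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

  # Removing the primary listener while I/O is in flight triggers the aborts and
  # the "Start failover ... Resetting controller successful" sequence logged here.
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421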
00:20:53.215 [2024-07-15 10:37:35.090517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.215 [2024-07-15 10:37:35.090569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:35.090609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.215 [2024-07-15 10:37:35.090636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:35.090663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.215 [2024-07-15 10:37:35.090688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:35.090714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.215 [2024-07-15 10:37:35.090739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:35.090764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.215 [2024-07-15 10:37:35.090788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:35.090844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.215 [2024-07-15 10:37:35.090878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:35.090907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.215 [2024-07-15 10:37:35.090931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:35.090957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.215 [2024-07-15 10:37:35.090981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:35.091009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.215 [2024-07-15 10:37:35.091033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:35.091060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.215 [2024-07-15 10:37:35.091084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:35.091124] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.215 [2024-07-15 10:37:35.091148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:35.091172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.215 [2024-07-15 10:37:35.091197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:35.091222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.215 [2024-07-15 10:37:35.091245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.215 [2024-07-15 10:37:35.091272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.091295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.091322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.091344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.091371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.091393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.091420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.091442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.091467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.091497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.091523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.091546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.091571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.091595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.091619] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.091644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.091668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.091693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.091718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.091742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.091767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.091816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.091846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.091881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.091908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.091931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.091959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.091982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44536 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:53.216 [2024-07-15 10:37:35.092696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.092963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.092988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.093013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.093038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.093064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.093089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.093128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.093152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.093177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.093200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.093227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.093249] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.093276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.093299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.216 [2024-07-15 10:37:35.093325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.216 [2024-07-15 10:37:35.093348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.093374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.093396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.093422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.093446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.093470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.093503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.093530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.093553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.093577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.093601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.093626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.093651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.093675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.093698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.093723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.093747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.093772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.093795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.093847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.093873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.093898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.093923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.093950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.093973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 
10:37:35.094834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.094964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.094990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.095014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.095040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.095063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.095089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.095129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.095153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.095178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.217 [2024-07-15 10:37:35.095202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.217 [2024-07-15 10:37:35.095227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.095251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.218 [2024-07-15 10:37:35.095276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.095301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.218 [2024-07-15 10:37:35.095324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.095350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.218 [2024-07-15 10:37:35.095372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.095398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.218 [2024-07-15 10:37:35.095421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.095446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.218 [2024-07-15 10:37:35.095473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.095500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.218 [2024-07-15 10:37:35.095523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.095548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.218 [2024-07-15 10:37:35.095571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.095596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.218 [2024-07-15 10:37:35.095620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.095645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.218 [2024-07-15 10:37:35.095670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.095712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.095736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45080 len:8 PRP1 0x0 PRP2 0x0 00:20:53.218 [2024-07-15 10:37:35.095757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.095845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.218 [2024-07-15 10:37:35.095873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.095898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.218 [2024-07-15 10:37:35.095922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:53.218 [2024-07-15 10:37:35.095944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.218 [2024-07-15 10:37:35.095969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.095992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.218 [2024-07-15 10:37:35.096015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.096038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25190f0 is same with the state(5) to be set 00:20:53.218 [2024-07-15 10:37:35.096325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.218 [2024-07-15 10:37:35.096350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.096368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45088 len:8 PRP1 0x0 PRP2 0x0 00:20:53.218 [2024-07-15 10:37:35.096390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.096415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.218 [2024-07-15 10:37:35.096436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.096461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45096 len:8 PRP1 0x0 PRP2 0x0 00:20:53.218 [2024-07-15 10:37:35.096482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.096506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.218 [2024-07-15 10:37:35.096524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.096543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45104 len:8 PRP1 0x0 PRP2 0x0 00:20:53.218 [2024-07-15 10:37:35.096564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.096585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.218 [2024-07-15 10:37:35.096605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.096623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45112 len:8 PRP1 0x0 PRP2 0x0 00:20:53.218 [2024-07-15 10:37:35.096644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.096668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.218 [2024-07-15 10:37:35.096686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.096703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:45120 len:8 PRP1 0x0 PRP2 0x0 00:20:53.218 [2024-07-15 10:37:35.096726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.096746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.218 [2024-07-15 10:37:35.096766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.096799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45128 len:8 PRP1 0x0 PRP2 0x0 00:20:53.218 [2024-07-15 10:37:35.096831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.096856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.218 [2024-07-15 10:37:35.096875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.096894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45136 len:8 PRP1 0x0 PRP2 0x0 00:20:53.218 [2024-07-15 10:37:35.096917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.096947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.218 [2024-07-15 10:37:35.096969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.096988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45144 len:8 PRP1 0x0 PRP2 0x0 00:20:53.218 [2024-07-15 10:37:35.097009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.097032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.218 [2024-07-15 10:37:35.097050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.097070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45152 len:8 PRP1 0x0 PRP2 0x0 00:20:53.218 [2024-07-15 10:37:35.097091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.097118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.218 [2024-07-15 10:37:35.097153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.097170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45160 len:8 PRP1 0x0 PRP2 0x0 00:20:53.218 [2024-07-15 10:37:35.097193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.097215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.218 [2024-07-15 10:37:35.097233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.097252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45168 len:8 PRP1 0x0 PRP2 0x0 
00:20:53.218 [2024-07-15 10:37:35.097272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.097294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.218 [2024-07-15 10:37:35.097313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.097331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45176 len:8 PRP1 0x0 PRP2 0x0 00:20:53.218 [2024-07-15 10:37:35.097354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.097375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.218 [2024-07-15 10:37:35.097393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.097413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45184 len:8 PRP1 0x0 PRP2 0x0 00:20:53.218 [2024-07-15 10:37:35.097433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.218 [2024-07-15 10:37:35.097455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.218 [2024-07-15 10:37:35.097474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.218 [2024-07-15 10:37:35.097490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45192 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.097513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.097535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.097554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.097574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44184 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.097594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.097623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.097643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.097662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44192 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.097685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.097706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.097727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.097745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44200 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.097770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.097818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.097839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.097860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44208 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.097881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.097906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.097925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.097944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44216 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.097968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.097989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.098009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.098028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44224 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.098048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.098072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.098104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.098122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44232 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.098145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.098168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.098187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.098206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44240 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.098225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.098248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.098267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.098284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44248 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.098307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.098335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.098355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.098373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44256 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.098393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.098417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.098434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.098458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44264 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.098480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.098502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.098520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.098537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44272 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.098559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.098582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.098600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.098619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44280 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.098640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.098660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.098680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.098698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44288 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.098719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.098741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.098759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.098792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44296 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.098823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:53.219 [2024-07-15 10:37:35.098848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.098867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.098886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44176 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.098908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.098930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.098949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.098969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44304 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.098991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.099015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.099038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.099057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44312 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.099081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.099116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.099140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.099159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44320 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.099180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.099204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.099222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.099242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44328 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.099263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.099285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.099305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.099323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44336 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.099346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.099369] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.099388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.099407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44344 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.099429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.099451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.099470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.099490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44352 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.099512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.099535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.099554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.099575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44360 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.099595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.099618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.219 [2024-07-15 10:37:35.099636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.219 [2024-07-15 10:37:35.099655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44368 len:8 PRP1 0x0 PRP2 0x0 00:20:53.219 [2024-07-15 10:37:35.099677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.219 [2024-07-15 10:37:35.099699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.099718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.099738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44376 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.099758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.099807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.099830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.099850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44384 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.099872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.099895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:20:53.220 [2024-07-15 10:37:35.099913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.099934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44392 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.099955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.099980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.099999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.100017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44400 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.100041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.100064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.100084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.100117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44408 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.100137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.100168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.100187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.100207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44416 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.100227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.100249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.100268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.100286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44424 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.100306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.100329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.100345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.100365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44432 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.100386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.100407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.100427] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.100445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44440 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.100470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.100493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.100510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.100530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44448 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.100550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.100572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.100590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.100607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44456 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.100629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.100651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.100669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.100688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44464 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.100709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.100730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.100749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.100767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44472 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.100786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.100831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.100852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.100872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44480 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.100893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.100914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.100934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.100953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44488 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.100975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.100997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.101015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.101034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44496 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.101056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.101078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.101098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.101134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44504 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.101156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.101178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.101196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.101216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44512 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.101235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.101257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.101276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.101293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44520 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.101315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.101336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.101354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 [2024-07-15 10:37:35.101373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44528 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.101393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.101415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.101433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.220 
[2024-07-15 10:37:35.101450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44536 len:8 PRP1 0x0 PRP2 0x0 00:20:53.220 [2024-07-15 10:37:35.101472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.220 [2024-07-15 10:37:35.101494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.220 [2024-07-15 10:37:35.101513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.101532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44544 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.101552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.101573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.101592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.101609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44552 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.101631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.101652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.101669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.101689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44560 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.101709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.101742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.101761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.101780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44568 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.101822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.101848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.101869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.101887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44576 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.101908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.101932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.101951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.101970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44584 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.101992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.102014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.102033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.108233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44592 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.108261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.108286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.108303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.108322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44600 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.108343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.108363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.108383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.108400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44608 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.108421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.108442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.108459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.108478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44616 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.108498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.108520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.108538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.108555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44624 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.108582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.108604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.108623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.108642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:44632 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.108661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.108682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.108699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.108716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44640 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.108738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.108758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.108776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.108821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44648 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.108842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.108882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.108900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.108920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44656 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.108941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.108962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.108983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.109001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44664 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.109023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.109046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.109064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.109098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44672 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.109119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.109140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.109172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.109188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44680 len:8 PRP1 0x0 PRP2 0x0 
00:20:53.221 [2024-07-15 10:37:35.109209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.109231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.109249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.109273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44688 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.109293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.109315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.109333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.109349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44696 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.109371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.109391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.109409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.109427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44704 len:8 PRP1 0x0 PRP2 0x0 00:20:53.221 [2024-07-15 10:37:35.109446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.221 [2024-07-15 10:37:35.109468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.221 [2024-07-15 10:37:35.109485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.221 [2024-07-15 10:37:35.109502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44712 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.109524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.109544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.109562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.109579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44720 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.109599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.109620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.109638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.109655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44728 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.109676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.109697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.109715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.109733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44736 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.109752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.109774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.109814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.109834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44744 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.109870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.109892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.109921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.109941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44752 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.109963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.109986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.110004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.110024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44760 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.110044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.110067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.110101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.110118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44768 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.110140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.110174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.110192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.110210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44776 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.110229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.110251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.110267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.110284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44784 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.110305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.110327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.110346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.110363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44792 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.110382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.110405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.110422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.110439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44800 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.110460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.110482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.110500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.110518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44808 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.110537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.110565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.110583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.110600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44816 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.110621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.110642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.110661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.110678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44824 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.110696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:53.222 [2024-07-15 10:37:35.110719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.110736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.110754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44832 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.110775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.110820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.110841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.110860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44840 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.110881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.110905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.110922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.110942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44848 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.110963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.110985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.111006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.111023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44856 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.111045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.111068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.111100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.111119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44864 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.111139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.111160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.111178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.111195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44872 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.111221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.111243] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.111261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.111278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44880 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.111297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.111320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.222 [2024-07-15 10:37:35.111338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.222 [2024-07-15 10:37:35.111355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44888 len:8 PRP1 0x0 PRP2 0x0 00:20:53.222 [2024-07-15 10:37:35.111376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.222 [2024-07-15 10:37:35.111397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.111416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.111433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44896 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.111452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.111475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.111492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.111510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44904 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.111530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.111550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.111569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.111585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44912 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.111606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.111627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.111644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.111663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44920 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.111682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.111702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.111720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.111736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44928 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.111759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.111795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.111824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.111850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44936 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.111872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.111896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.111915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.111934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44944 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.111957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.111986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.112009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.112027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44952 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.112050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.112074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.112106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.112127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44960 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.112161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.112183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.112201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.112218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44968 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.112240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.112261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 
10:37:35.112279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.112297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44976 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.112317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.112340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.112358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.112376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44984 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.112398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.112419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.112438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.112455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44992 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.112475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.112503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.112522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.112542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45000 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.112561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.112583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.112601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.112619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45008 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.112641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.112669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.112690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.112706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45016 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.112726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.112749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.112766] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.112807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45024 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.112831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.112868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.112888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.112907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45032 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.112930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.112954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.112973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.112994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45040 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.113015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.223 [2024-07-15 10:37:35.113038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.223 [2024-07-15 10:37:35.113057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.223 [2024-07-15 10:37:35.113076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45048 len:8 PRP1 0x0 PRP2 0x0 00:20:53.223 [2024-07-15 10:37:35.113100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.224 [2024-07-15 10:37:35.113122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.224 [2024-07-15 10:37:35.113155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.224 [2024-07-15 10:37:35.113174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45056 len:8 PRP1 0x0 PRP2 0x0 00:20:53.224 [2024-07-15 10:37:35.113213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.224 [2024-07-15 10:37:35.113237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.224 [2024-07-15 10:37:35.113254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.224 [2024-07-15 10:37:35.113272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45064 len:8 PRP1 0x0 PRP2 0x0 00:20:53.224 [2024-07-15 10:37:35.113292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.224 [2024-07-15 10:37:35.113312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.224 [2024-07-15 10:37:35.113332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:20:53.224 [2024-07-15 10:37:35.113349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45072 len:8 PRP1 0x0 PRP2 0x0 00:20:53.224 [2024-07-15 10:37:35.113370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.224 [2024-07-15 10:37:35.113399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.224 [2024-07-15 10:37:35.113419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.224 [2024-07-15 10:37:35.113438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45080 len:8 PRP1 0x0 PRP2 0x0 00:20:53.224 [2024-07-15 10:37:35.113457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.224 [2024-07-15 10:37:35.113538] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26e3b70 was disconnected and freed. reset controller. 00:20:53.224 [2024-07-15 10:37:35.113564] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:20:53.224 [2024-07-15 10:37:35.113589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:53.224 [2024-07-15 10:37:35.113653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25190f0 (9): Bad file descriptor 00:20:53.224 [2024-07-15 10:37:35.117737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:53.224 [2024-07-15 10:37:35.192071] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:53.224
00:20:53.224 Latency(us)
00:20:53.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:53.224 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:53.224 Verification LBA range: start 0x0 length 0x4000
00:20:53.224 NVMe0n1 : 15.01 8573.97 33.49 709.30 0.00 13759.90 534.00 32234.00
00:20:53.224 ===================================================================================================================
00:20:53.224 Total : 8573.97 33.49 709.30 0.00 13759.90 534.00 32234.00
00:20:53.224 Received shutdown signal, test time was about 15.000000 seconds
00:20:53.224
00:20:53.224 Latency(us)
00:20:53.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:53.224 ===================================================================================================================
00:20:53.224 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1264925
00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1264925 /var/tmp/bdevperf.sock
00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1264925 ']'
00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:53.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:53.224 [2024-07-15 10:37:41.690529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:53.224 10:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:53.482 [2024-07-15 10:37:41.939257] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:53.482 10:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:54.050 NVMe0n1 00:20:54.050 10:37:42 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:54.309 00:20:54.309 10:37:42 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:54.568 00:20:54.568 10:37:43 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:54.568 10:37:43 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:20:54.826 10:37:43 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:55.086 10:37:43 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:20:58.427 10:37:46 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:58.427 10:37:46 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:20:58.427 10:37:46 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1265592 00:20:58.427 10:37:46 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:58.427 10:37:46 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1265592 00:20:59.800 0 00:20:59.800 10:37:47 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:59.800 [2024-07-15 10:37:41.196614] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:59.800 [2024-07-15 10:37:41.196703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264925 ] 00:20:59.800 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.800 [2024-07-15 10:37:41.258455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.800 [2024-07-15 10:37:41.364205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.800 [2024-07-15 10:37:43.546341] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:20:59.800 [2024-07-15 10:37:43.546414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.800 [2024-07-15 10:37:43.546445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.800 [2024-07-15 10:37:43.546469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.800 [2024-07-15 10:37:43.546506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.800 [2024-07-15 10:37:43.546529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.800 [2024-07-15 10:37:43.546551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.800 [2024-07-15 10:37:43.546575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.800 [2024-07-15 10:37:43.546597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.800 [2024-07-15 10:37:43.546620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:59.800 [2024-07-15 10:37:43.546678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:59.800 [2024-07-15 10:37:43.546720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4a0f0 (9): Bad file descriptor 00:20:59.800 [2024-07-15 10:37:43.600018] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:59.800 Running I/O for 1 seconds... 
00:20:59.800 00:20:59.800 Latency(us) 00:20:59.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.800 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:59.800 Verification LBA range: start 0x0 length 0x4000 00:20:59.800 NVMe0n1 : 1.01 8937.31 34.91 0.00 0.00 14258.67 3228.25 11602.30 00:20:59.800 =================================================================================================================== 00:20:59.800 Total : 8937.31 34.91 0.00 0.00 14258.67 3228.25 11602.30 00:20:59.800 10:37:47 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:59.800 10:37:47 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:20:59.800 10:37:48 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:00.058 10:37:48 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:00.058 10:37:48 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:00.315 10:37:48 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:00.573 10:37:48 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:03.854 10:37:51 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:03.854 10:37:51 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:03.854 10:37:52 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1264925 00:21:03.854 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1264925 ']' 00:21:03.854 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1264925 00:21:03.854 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:03.854 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:03.854 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1264925 00:21:03.854 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:03.854 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:03.854 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1264925' 00:21:03.854 killing process with pid 1264925 00:21:03.854 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1264925 00:21:03.854 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1264925 00:21:04.111 10:37:52 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:04.111 10:37:52 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:04.369 
10:37:52 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:04.369 rmmod nvme_tcp 00:21:04.369 rmmod nvme_fabrics 00:21:04.369 rmmod nvme_keyring 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1262698 ']' 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1262698 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1262698 ']' 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1262698 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1262698 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:04.369 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:04.370 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1262698' 00:21:04.370 killing process with pid 1262698 00:21:04.370 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1262698 00:21:04.370 10:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1262698 00:21:04.938 10:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:04.938 10:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:04.938 10:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:04.938 10:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:04.938 10:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:04.938 10:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.938 10:37:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.938 10:37:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.843 10:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:06.843 00:21:06.843 real 0m35.102s 00:21:06.843 user 2m2.764s 00:21:06.843 sys 0m6.189s 00:21:06.843 10:37:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:06.843 10:37:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
00:21:06.843 ************************************ 00:21:06.843 END TEST nvmf_failover 00:21:06.843 ************************************ 00:21:06.843 10:37:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:06.843 10:37:55 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:06.843 10:37:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:06.843 10:37:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:06.843 10:37:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:06.843 ************************************ 00:21:06.843 START TEST nvmf_host_discovery 00:21:06.843 ************************************ 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:06.843 * Looking for test storage... 00:21:06.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:06.843 10:37:55 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:06.843 10:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.747 10:37:57 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:08.747 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:08.747 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.747 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:09.006 10:37:57 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:09.006 Found net devices under 0000:09:00.0: cvl_0_0 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:09.006 Found net devices under 0000:09:00.1: cvl_0_1 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.006 10:37:57 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:09.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:21:09.006 00:21:09.006 --- 10.0.0.2 ping statistics --- 00:21:09.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.006 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:09.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:09.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:21:09.006 00:21:09.006 --- 10.0.0.1 ping statistics --- 00:21:09.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.006 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.006 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1268234 00:21:09.007 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:09.007 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1268234 00:21:09.007 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1268234 ']' 00:21:09.007 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.007 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.007 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.007 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.007 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.007 [2024-07-15 10:37:57.499528] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:09.007 [2024-07-15 10:37:57.499615] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.007 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.266 [2024-07-15 10:37:57.562074] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.266 [2024-07-15 10:37:57.664172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.266 [2024-07-15 10:37:57.664225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.266 [2024-07-15 10:37:57.664252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.266 [2024-07-15 10:37:57.664263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.266 [2024-07-15 10:37:57.664272] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:09.266 [2024-07-15 10:37:57.664296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.266 [2024-07-15 10:37:57.802445] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.266 [2024-07-15 10:37:57.810626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.266 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.525 null0 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.525 null1 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1268258 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1268258 /tmp/host.sock 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1268258 ']' 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:09.525 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.525 10:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.525 [2024-07-15 10:37:57.881480] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:09.525 [2024-07-15 10:37:57.881560] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1268258 ] 00:21:09.525 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.525 [2024-07-15 10:37:57.937347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.525 [2024-07-15 10:37:58.041808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:09.784 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:09.785 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.043 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:10.043 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
00:21:10.043 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.043 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.044 [2024-07-15 10:37:58.436271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:10.044 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.302 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:10.302 10:37:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:10.868 [2024-07-15 10:37:59.210523] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:10.868 [2024-07-15 10:37:59.210546] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:10.868 [2024-07-15 10:37:59.210567] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:10.868 [2024-07-15 10:37:59.338016] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:11.126 [2024-07-15 10:37:59.441389] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:21:11.126 [2024-07-15 10:37:59.441411] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:11.126 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:11.386 10:37:59 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:11.386 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.647 10:37:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.647 [2024-07-15 10:38:00.004928] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:11.647 [2024-07-15 10:38:00.005553] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:11.648 [2024-07-15 10:38:00.005594] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.648 10:38:00 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:11.648 10:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:11.648 [2024-07-15 10:38:00.132450] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:11.907 [2024-07-15 10:38:00.235063] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:11.907 [2024-07-15 10:38:00.235086] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:11.907 [2024-07-15 10:38:00.235095] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.846 [2024-07-15 10:38:01.209353] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:12.846 [2024-07-15 10:38:01.209407] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:12.846 [2024-07-15 10:38:01.218377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.846 [2024-07-15 10:38:01.218411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.846 [2024-07-15 10:38:01.218450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.846 [2024-07-15 10:38:01.218465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.846 [2024-07-15 10:38:01.218480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.846 [2024-07-15 10:38:01.218493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.846 [2024-07-15 10:38:01.218508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.846 [2024-07-15 10:38:01.218521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.846 [2024-07-15 10:38:01.218535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e5c00 is same with the state(5) to be set 00:21:12.846 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.846 [2024-07-15 10:38:01.228368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e5c00 (9): Bad file descriptor 00:21:12.846 [2024-07-15 10:38:01.238411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:12.846 [2024-07-15 10:38:01.238691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.846 [2024-07-15 10:38:01.238722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e5c00 with addr=10.0.0.2, port=4420 00:21:12.846 [2024-07-15 10:38:01.238741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e5c00 is same with the state(5) to be set 00:21:12.846 [2024-07-15 10:38:01.238765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e5c00 (9): Bad file descriptor 00:21:12.846 [2024-07-15 10:38:01.238788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:12.846 [2024-07-15 10:38:01.238811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:12.846 [2024-07-15 10:38:01.238831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:12.846 [2024-07-15 10:38:01.238860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
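The repeated autotest_common.sh@912-@918 trace lines throughout this test come from the generic waitforcondition polling helper. A minimal sketch of that helper, reconstructed only from the traced statements (the failure path after the retries run out is an assumption, not the actual SPDK source):

waitforcondition() {
    local cond=$1        # e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]'   (@912)
    local max=10         # retry budget, seen as 'local max=10'           (@913)
    while (( max-- )); do                                                 # (@914)
        eval "$cond" && return 0      # condition met                     (@915/@916)
        sleep 1                       # wait before the next poll         (@918)
    done
    return 1             # assumed: give up after roughly ten seconds
}

Each "(( max-- )) ... eval ... return 0" run visible above is one pass through this loop.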
00:21:12.846 [2024-07-15 10:38:01.248504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:12.846 [2024-07-15 10:38:01.248676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.846 [2024-07-15 10:38:01.248703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e5c00 with addr=10.0.0.2, port=4420 00:21:12.847 [2024-07-15 10:38:01.248720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e5c00 is same with the state(5) to be set 00:21:12.847 [2024-07-15 10:38:01.248743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e5c00 (9): Bad file descriptor 00:21:12.847 [2024-07-15 10:38:01.248763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:12.847 [2024-07-15 10:38:01.248776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:12.847 [2024-07-15 10:38:01.248790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:12.847 [2024-07-15 10:38:01.248819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:12.847 [2024-07-15 10:38:01.258588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:12.847 [2024-07-15 10:38:01.258856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.847 [2024-07-15 10:38:01.258886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e5c00 with addr=10.0.0.2, port=4420 00:21:12.847 [2024-07-15 10:38:01.258904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e5c00 is same with the state(5) to be set 00:21:12.847 [2024-07-15 10:38:01.258926] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e5c00 (9): Bad file descriptor 00:21:12.847 [2024-07-15 10:38:01.259716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:12.847 [2024-07-15 10:38:01.259738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:12.847 [2024-07-15 10:38:01.259752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:12.847 [2024-07-15 10:38:01.259799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:12.847 [2024-07-15 10:38:01.268659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:12.847 [2024-07-15 10:38:01.268873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.847 [2024-07-15 10:38:01.268901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e5c00 with addr=10.0.0.2, port=4420 00:21:12.847 [2024-07-15 10:38:01.268919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e5c00 is same with the state(5) to be set 00:21:12.847 [2024-07-15 10:38:01.268942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e5c00 (9): Bad file descriptor 00:21:12.847 [2024-07-15 10:38:01.268978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:12.847 [2024-07-15 10:38:01.268997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:12.847 [2024-07-15 10:38:01.269011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:12.847 [2024-07-15 10:38:01.269031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:12.847 [2024-07-15 10:38:01.278744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:12.847 [2024-07-15 10:38:01.278938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.847 [2024-07-15 10:38:01.278966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e5c00 with addr=10.0.0.2, port=4420 00:21:12.847 [2024-07-15 10:38:01.278983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e5c00 is same with the state(5) to be set 00:21:12.847 [2024-07-15 10:38:01.279011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e5c00 (9): Bad file descriptor 00:21:12.847 [2024-07-15 10:38:01.279061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:12.847 [2024-07-15 10:38:01.279082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:12.847 [2024-07-15 10:38:01.279095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:12.847 [2024-07-15 10:38:01.279114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
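The conditions being polled are built from small RPC wrappers in host/discovery.sh; their bodies can be read almost verbatim from the @55, @59 and @63 trace lines above. A sketch, assuming rpc_cmd is the usual wrapper that sends the named RPC to the application listening on /tmp/host.sock:

get_subsystem_names() {   # host/discovery.sh@59: controller names known to the host
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {         # host/discovery.sh@55: bdevs created from attached namespaces
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {   # host/discovery.sh@63: trsvcid of every path to one controller
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

The trailing xargs joins the sorted values onto a single line, which is why the comparisons above test against strings such as "nvme0n1 nvme0n2" and "4420 4421".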
00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.847 [2024-07-15 10:38:01.288827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:12.847 [2024-07-15 10:38:01.288995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.847 [2024-07-15 10:38:01.289023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e5c00 with addr=10.0.0.2, port=4420 00:21:12.847 [2024-07-15 10:38:01.289040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e5c00 is same with the state(5) to be set 00:21:12.847 [2024-07-15 10:38:01.289062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e5c00 (9): Bad file descriptor 00:21:12.847 [2024-07-15 10:38:01.289097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:12.847 [2024-07-15 10:38:01.289116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:12.847 [2024-07-15 10:38:01.289130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:12.847 [2024-07-15 10:38:01.289149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:12.847 [2024-07-15 10:38:01.295111] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:12.847 [2024-07-15 10:38:01.295155] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.847 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:13.106 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.107 10:38:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.489 [2024-07-15 10:38:02.598426] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:14.489 [2024-07-15 10:38:02.598464] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:14.490 [2024-07-15 10:38:02.598488] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:14.490 [2024-07-15 10:38:02.725887] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:14.490 [2024-07-15 10:38:02.793834] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:14.490 [2024-07-15 10:38:02.793879] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:14.490 request: 00:21:14.490 { 00:21:14.490 "name": "nvme", 00:21:14.490 "trtype": "tcp", 00:21:14.490 "traddr": "10.0.0.2", 00:21:14.490 "adrfam": "ipv4", 00:21:14.490 "trsvcid": "8009", 00:21:14.490 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:14.490 "wait_for_attach": true, 00:21:14.490 "method": "bdev_nvme_start_discovery", 00:21:14.490 "req_id": 1 00:21:14.490 } 00:21:14.490 Got JSON-RPC error response 00:21:14.490 response: 00:21:14.490 { 00:21:14.490 "code": -17, 00:21:14.490 "message": "File exists" 00:21:14.490 } 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.490 request: 00:21:14.490 { 00:21:14.490 "name": "nvme_second", 00:21:14.490 "trtype": "tcp", 00:21:14.490 "traddr": "10.0.0.2", 00:21:14.490 "adrfam": "ipv4", 00:21:14.490 "trsvcid": "8009", 00:21:14.490 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:14.490 "wait_for_attach": true, 00:21:14.490 "method": "bdev_nvme_start_discovery", 00:21:14.490 "req_id": 1 00:21:14.490 } 00:21:14.490 Got JSON-RPC error response 00:21:14.490 response: 00:21:14.490 { 00:21:14.490 "code": -17, 00:21:14.490 "message": "File exists" 00:21:14.490 } 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.490 10:38:02 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.490 10:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.873 [2024-07-15 10:38:04.001695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:15.873 [2024-07-15 10:38:04.001741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1800c90 with addr=10.0.0.2, port=8010 00:21:15.873 [2024-07-15 10:38:04.001777] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:15.873 [2024-07-15 10:38:04.001799] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:15.873 [2024-07-15 10:38:04.001831] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:16.811 [2024-07-15 10:38:05.004180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.811 [2024-07-15 10:38:05.004217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c3540 with addr=10.0.0.2, port=8010 00:21:16.811 [2024-07-15 10:38:05.004249] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:16.811 [2024-07-15 10:38:05.004269] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:16.811 [2024-07-15 10:38:05.004289] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:17.750 [2024-07-15 10:38:06.006331] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:17.750 request: 00:21:17.750 { 00:21:17.750 "name": "nvme_second", 00:21:17.750 "trtype": "tcp", 00:21:17.750 "traddr": "10.0.0.2", 00:21:17.750 "adrfam": "ipv4", 00:21:17.750 "trsvcid": "8010", 00:21:17.750 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:17.750 "wait_for_attach": false, 00:21:17.750 "attach_timeout_ms": 3000, 00:21:17.750 "method": "bdev_nvme_start_discovery", 00:21:17.750 "req_id": 1 00:21:17.750 } 00:21:17.750 Got JSON-RPC error response 00:21:17.750 response: 00:21:17.750 { 00:21:17.750 "code": -110, 
00:21:17.750 "message": "Connection timed out" 00:21:17.750 } 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1268258 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:17.750 rmmod nvme_tcp 00:21:17.750 rmmod nvme_fabrics 00:21:17.750 rmmod nvme_keyring 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1268234 ']' 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1268234 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1268234 ']' 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1268234 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1268234 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
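While the harness tears the target down, one detail of the assertions above is worth spelling out: the notification counts come from the host application's event log, read relative to a running cursor. The host/discovery.sh@74 and @75 trace lines imply a helper along these lines (only the two traced assignments are certain; the cursor arithmetic is an assumption):

get_notification_count() {
    # How many notifications arrived after the last one we have seen ($notify_id)?
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))   # matches notify_id advancing to 1, 2 and finally 4 above
}

is_notification_count_eq (host/discovery.sh@79-@80) then simply compares $notification_count against the expected value inside a waitforcondition loop.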
00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1268234' 00:21:17.750 killing process with pid 1268234 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1268234 00:21:17.750 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1268234 00:21:18.010 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:18.010 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:18.010 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:18.010 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:18.010 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:18.010 10:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.010 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.010 10:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.917 10:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:19.917 00:21:19.917 real 0m13.156s 00:21:19.917 user 0m19.199s 00:21:19.917 sys 0m2.693s 00:21:19.917 10:38:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:19.917 10:38:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.917 ************************************ 00:21:19.917 END TEST nvmf_host_discovery 00:21:19.917 ************************************ 00:21:19.917 10:38:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:19.917 10:38:08 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:19.917 10:38:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:19.917 10:38:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:19.917 10:38:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:20.175 ************************************ 00:21:20.175 START TEST nvmf_host_multipath_status 00:21:20.175 ************************************ 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:20.175 * Looking for test storage... 
00:21:20.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.175 10:38:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:20.175 10:38:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:22.706 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:22.706 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
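(Editor's note, not from the test scripts: a rough sketch of how the two E810 ports identified here, 0000:09:00.0 and 0000:09:00.1 with PCI ID 0x8086:0x159b, can be mapped to their kernel net devices; the lspci invocation is an assumption for illustration, while the sysfs path mirrors the lookup traced in the following records.)
for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "$pci -> ${dev##*/}"   # e.g. 0000:09:00.0 -> cvl_0_0, as reported below
    done
done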
00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:22.706 Found net devices under 0000:09:00.0: cvl_0_0 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:22.706 Found net devices under 0000:09:00.1: cvl_0_1 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:22.706 10:38:10 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.706 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:22.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:21:22.706 00:21:22.706 --- 10.0.0.2 ping statistics --- 00:21:22.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.706 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:22.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:21:22.707 00:21:22.707 --- 10.0.0.1 ping statistics --- 00:21:22.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.707 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1271384 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1271384 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1271384 ']' 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.707 10:38:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:22.707 [2024-07-15 10:38:10.858745] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
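(Editor's note: a hedged sketch condensed from the nvmf_tcp_init trace above; device names, addresses, and the namespace name are the ones in this log, and ordering and error handling are simplified. The target-side port is moved into a network namespace, the initiator side stays in the root namespace, TCP port 4420 is opened, and reachability is checked in both directions.)
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target-side E810 port
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target namespace -> initiator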
00:21:22.707 [2024-07-15 10:38:10.858851] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.707 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.707 [2024-07-15 10:38:10.922090] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:22.707 [2024-07-15 10:38:11.036462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.707 [2024-07-15 10:38:11.036516] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.707 [2024-07-15 10:38:11.036530] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.707 [2024-07-15 10:38:11.036541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.707 [2024-07-15 10:38:11.036551] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.707 [2024-07-15 10:38:11.036631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.707 [2024-07-15 10:38:11.036635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.707 10:38:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.707 10:38:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:22.707 10:38:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:22.707 10:38:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:22.707 10:38:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:22.707 10:38:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.707 10:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1271384 00:21:22.707 10:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:22.965 [2024-07-15 10:38:11.452040] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.965 10:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:23.223 Malloc0 00:21:23.223 10:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:23.481 10:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:23.769 10:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.054 [2024-07-15 10:38:12.521037] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.054 10:38:12 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:24.313 [2024-07-15 10:38:12.781671] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:24.313 10:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1271585 00:21:24.313 10:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:24.313 10:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:24.313 10:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1271585 /var/tmp/bdevperf.sock 00:21:24.313 10:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1271585 ']' 00:21:24.313 10:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.313 10:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.313 10:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.313 10:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.313 10:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:24.880 10:38:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.880 10:38:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:24.880 10:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:24.880 10:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:25.445 Nvme0n1 00:21:25.445 10:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:25.702 Nvme0n1 00:21:25.702 10:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:25.702 10:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:28.229 10:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:28.229 10:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:28.229 10:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:28.229 10:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:29.603 10:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:29.603 10:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:29.603 10:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.603 10:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:29.603 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.603 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:29.603 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.603 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:29.861 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:29.861 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:29.861 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.861 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:30.119 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.119 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:30.119 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.119 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:30.376 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.376 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:30.376 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.376 10:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:30.633 10:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.633 10:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:30.633 10:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.633 10:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:30.891 10:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.891 10:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:30.891 10:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:31.149 10:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:31.408 10:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:32.341 10:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:32.341 10:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:32.341 10:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.341 10:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:32.598 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:32.598 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:32.599 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.599 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:32.857 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.857 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:32.857 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.857 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:33.115 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.115 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:33.115 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.115 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:33.374 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.374 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:33.374 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.374 10:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:33.632 10:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.632 10:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:33.632 10:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.632 10:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:33.891 10:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.891 10:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:33.891 10:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:34.149 10:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:34.409 10:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:35.347 10:38:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:35.347 10:38:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:35.347 10:38:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.347 10:38:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:35.606 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:35.606 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:35.606 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.606 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:35.865 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:35.865 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:35.865 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.865 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:36.123 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.123 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:36.123 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.123 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:36.382 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.382 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:36.382 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.382 10:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:36.640 10:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.640 10:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:36.640 10:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.640 10:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:36.899 10:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.899 10:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:36.899 10:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:37.157 10:38:25 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:37.415 10:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:38.351 10:38:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:38.351 10:38:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:38.351 10:38:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.351 10:38:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:38.609 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:38.609 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:38.609 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.609 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:38.867 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:38.867 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:38.867 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.867 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:39.125 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.125 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:39.125 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.125 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:39.383 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.383 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:39.383 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.384 10:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:39.642 10:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:21:39.642 10:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:39.642 10:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.642 10:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:39.900 10:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:39.900 10:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:39.900 10:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:40.157 10:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:40.415 10:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:41.350 10:38:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:41.350 10:38:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:41.350 10:38:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:41.350 10:38:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:41.609 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:41.609 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:41.609 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:41.609 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:41.867 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:41.867 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:41.867 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:41.867 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:42.125 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.125 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:21:42.125 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.125 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:42.384 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.384 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:42.384 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.384 10:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:42.643 10:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:42.643 10:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:42.643 10:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.643 10:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:42.901 10:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:42.901 10:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:42.901 10:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:43.159 10:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:43.418 10:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:44.353 10:38:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:44.353 10:38:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:44.353 10:38:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:44.353 10:38:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:44.611 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:44.611 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:44.611 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:44.611 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:44.868 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:44.868 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:44.868 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:44.869 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:45.125 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:45.125 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:45.125 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.125 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:45.391 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:45.391 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:45.391 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.391 10:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:45.695 10:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:45.695 10:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:45.695 10:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.695 10:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:45.957 10:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:45.958 10:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:46.216 10:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:46.216 10:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:21:46.473 10:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:46.731 10:38:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:47.662 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:47.662 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:47.662 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:47.662 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:47.919 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:47.919 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:47.919 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:47.919 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:48.177 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.177 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:48.177 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.177 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:48.434 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.435 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:48.435 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.435 10:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:48.692 10:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.692 10:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:48.692 10:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.692 10:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:48.950 10:38:37 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.950 10:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:48.950 10:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.950 10:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:49.208 10:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.208 10:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:49.208 10:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:49.465 10:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:49.725 10:38:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:50.662 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:50.662 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:50.662 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:50.662 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:50.923 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:50.923 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:50.923 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:50.923 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:51.179 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.179 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:51.179 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.179 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:51.436 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.436 10:38:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:51.436 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.436 10:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:51.694 10:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.694 10:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:51.694 10:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.694 10:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:51.951 10:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.951 10:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:51.951 10:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.951 10:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:52.207 10:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.207 10:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:52.207 10:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:52.464 10:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:52.740 10:38:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:21:53.675 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:53.675 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:53.675 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.675 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:53.933 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:53.933 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:53.933 10:38:42 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.933 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:54.190 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.190 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:54.190 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.190 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:54.448 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.448 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:54.448 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.448 10:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:54.706 10:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.706 10:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:54.706 10:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.706 10:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:54.963 10:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.963 10:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:54.963 10:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.963 10:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:55.221 10:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.221 10:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:55.221 10:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:55.479 10:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:55.738 10:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:21:56.673 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:21:56.673 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:56.673 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.673 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:56.931 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.931 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:56.931 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.931 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:57.189 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:57.189 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:57.189 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.189 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:57.448 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.448 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:57.448 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.448 10:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:57.706 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.706 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:57.706 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.706 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:57.964 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.964 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:57.965 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.965 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:58.223 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:58.223 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1271585 00:21:58.223 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1271585 ']' 00:21:58.223 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1271585 00:21:58.223 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:21:58.223 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:58.223 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1271585 00:21:58.481 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:58.481 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:58.481 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1271585' 00:21:58.481 killing process with pid 1271585 00:21:58.481 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1271585 00:21:58.481 10:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1271585 00:21:58.481 Connection closed with partial response: 00:21:58.481 00:21:58.481 00:21:58.744 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1271585 00:21:58.744 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:58.744 [2024-07-15 10:38:12.840834] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:58.744 [2024-07-15 10:38:12.840935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1271585 ] 00:21:58.744 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.744 [2024-07-15 10:38:12.901575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.744 [2024-07-15 10:38:13.012960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.744 Running I/O for 90 seconds... 
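The trace above exercises two RPC patterns that appear verbatim in the log: on the target side, scripts/rpc.py nvmf_subsystem_listener_set_ana_state flips the ANA state (optimized / non_optimized / inaccessible) of one listener at a time, and on the initiator side bdev_nvme_get_io_paths is issued against the bdevperf RPC socket and filtered with jq to assert the per-port current / connected / accessible flags. The lines below are a minimal hand-rolled sketch of that loop for manual reproduction, assuming an SPDK checkout with a target at 10.0.0.2 and bdevperf on /var/tmp/bdevperf.sock as in this run; the helper name assert_path_flag is hypothetical and not part of multipath_status.sh.

    #!/usr/bin/env bash
    # Sketch reconstructed from the traced commands above; not the test script itself.
    rpc=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Target side: set the ANA state of each listener, as set_ANA_state does in the trace.
    $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    sleep 1   # the test sleeps 1s so the host can observe the ANA change

    # Initiator side: query bdevperf's io_paths and compare one flag for one port.
    # Assumes a single poll group / one path per port, as in this single-core bdevperf run.
    assert_path_flag() {
        local port=$1 flag=$2 expected=$3
        local actual
        actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$flag")
        [[ "$actual" == "$expected" ]]
    }

    assert_path_flag 4420 current true        # non_optimized path still carries I/O
    assert_path_flag 4421 current false       # inaccessible path is no longer current
    assert_path_flag 4421 accessible false

The bdevperf dump that follows is the expected counterpart on the I/O side: each command completed while its path's ANA group was inaccessible is reported with status ASYMMETRIC ACCESS INACCESSIBLE (SCT 0x3, SC 0x02, printed as 03/02), which is what drives the failover onto the other listener during the 90-second I/O run.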
00:21:58.744 [2024-07-15 10:38:28.609900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.609964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.610624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.610640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.612362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.612389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.612422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.612442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.612486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.612504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.612546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.612563] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.612589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.612621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.612649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.612667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.612695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.612712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.612747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.612765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.612792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.612820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.612850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.612867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.612895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.612928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.612955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.612972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.612999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.613017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.613044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 
10:38:28.613061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.613103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.613120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.613146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.613162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 10:38:28.613187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 10:38:28.613203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.613244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.613285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.613332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.613372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.613413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.613454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88232 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.613496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.613536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.613578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.745 [2024-07-15 10:38:28.613712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.613779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.613836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.613883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.613929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.613958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.613979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:26 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.614944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.614961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:21:58.745 [2024-07-15 10:38:28.614989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.615006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.615033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.615050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.615078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.615095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.615142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.615159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.615187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.615203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.615231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.615248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.615274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.615291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.615319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.615335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.745 [2024-07-15 10:38:28.615361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.745 [2024-07-15 10:38:28.615378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:28.615405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:28.615421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:28.615448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:28.615464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:28.615491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:28.615506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:28.615533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:28.615550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:28.615576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.746 [2024-07-15 10:38:28.615593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:28.615619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.746 [2024-07-15 10:38:28.615635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:28.615666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.746 [2024-07-15 10:38:28.615683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:28.615710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.746 [2024-07-15 10:38:28.615726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:28.615752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.746 [2024-07-15 10:38:28.615769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:28.615819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.746 [2024-07-15 10:38:28.615838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:28.615867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.746 [2024-07-15 10:38:28.615884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.198977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.198993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.746 [2024-07-15 10:38:44.199072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:69 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199494] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.746 [2024-07-15 10:38:44.199687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.746 [2024-07-15 10:38:44.199705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.199726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.199742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.199764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.199793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.199826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.199843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.199882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.199899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.199921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.199938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.199969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.199986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.200009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.200026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.200048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.200065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.200087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.200119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.200142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.200158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.200201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.200218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.200240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.200255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.200276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.200292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.200313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.200329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.200350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.200366] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.202401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.202426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.202453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.202471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.202493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.202509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.202531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.202546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.202568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.202584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.202606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.202622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.202644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.202660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.202686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.202703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.202725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.202742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.202764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 
10:38:44.202785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.202831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.202860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.202883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.202900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.202933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.202949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.202971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.202987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.203025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.203042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.203065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.203082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.203105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.203121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.203143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.203160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.203183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.203200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.203222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100664 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.203243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.203571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.203600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.203629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.203647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.203670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.747 [2024-07-15 10:38:44.203687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.203710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.203727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.203749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.203766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.203788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.203814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.203839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.203857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.747 [2024-07-15 10:38:44.203880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.747 [2024-07-15 10:38:44.203896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.203919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.203936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.203958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.203975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.203997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 
sqhd:0043 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.204576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.204631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.204670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.204724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.204745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.204776] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.205369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.205397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.205427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.205446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.205469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.205486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.205509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.205525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.205548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.205564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.205587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.205604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.205626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.205666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.205694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.205727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.205749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.205767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.205811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 
10:38:44.205830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.205857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.205874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.205896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.205912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.205934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.205952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.205973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.205991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.206013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.206029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.206051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.206067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.206106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.748 [2024-07-15 10:38:44.206123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.206146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.206162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.206185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.748 [2024-07-15 10:38:44.206202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.748 [2024-07-15 10:38:44.206229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101152 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.206246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.206286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.206325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.206364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.206419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.206472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.206511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.206556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.206595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.206632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.206670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.206708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.206749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.206788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.206860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.206883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.206917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.209205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.209249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.209289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.209326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 
p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.209364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.209402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.209441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.209480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.209522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.209562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.209600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.209640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.209679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.209717] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.209755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.209793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.209859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.209898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.209937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.209975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.209998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.210020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.210044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.210061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.210083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.210100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.210138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 
10:38:44.210154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.210175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.210191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.210213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.210229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.210251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.210267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.210288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.210304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.210326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.210342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.210363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.210379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.210401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 10:38:44.210417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.210438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.210455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 10:38:44.210477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 10:38:44.210493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.212794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101416 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.212828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.212857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.212876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.212900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.212917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.212940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.212958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.212980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.212998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.213039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.213079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.213135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.213190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.213229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.213268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.213308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.213370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.213411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.213451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.213491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.213531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.213571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.213612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.213667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 
dnr:0 00:21:58.750 [2024-07-15 10:38:44.213689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.213725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.213764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.213838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.213910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.213970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.213994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.214011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.214034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.214051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.214074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.214091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.214114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.214131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.214169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.214186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.214209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.214240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.214262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.214277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.214314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.214330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.214367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.214385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.214408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.214425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.214448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.214465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.214487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.214513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.214539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.214557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.215208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.215233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.215263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 
10:38:44.215282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.215307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.215324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.215346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.215363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.215386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.215419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.215442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.215458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.215495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.215511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.215531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.215547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.215573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 10:38:44.215588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.215609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.215625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 10:38:44.215645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 10:38:44.215665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.215687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101720 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.215702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.215722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.215737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.215758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.215773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.215817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.215863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.215887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.215904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.215925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.215942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.215964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.215981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.216003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.216019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.216057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.216074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.216103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.216120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.216143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.216161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.216185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.216202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.216229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.216247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.216270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.216287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.216310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.216328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.216792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.216824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.216863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.216882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.216906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.216923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.216946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.216962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.216985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.217003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 
m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.217043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.217104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.217160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.217198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.217240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.217278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.217314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.217351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.217405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.217460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.217501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.217542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.217582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.217622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.217662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.217702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.217762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.217786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.217811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.219567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 10:38:44.219590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.219615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 
10:38:44.219632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.219655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.219671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.219692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 10:38:44.219708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 10:38:44.219729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.219745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.219765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.219781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.219809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.219846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.219869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.219885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.219907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.219924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.219946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.219961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.219983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.220003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101120 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.220042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.220087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.220140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.220178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.220215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.220251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.220288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.220324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.220361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.220398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.220434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.220474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.220511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.220547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.220568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.220584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.221820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.221854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.221881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.221900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.221923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.221955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.221979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.222015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.222038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.222065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 
m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.222101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.222117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.222137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.222152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.222173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.222189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.222210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.222226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.223484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.223522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.223551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.223585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.223609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.223625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.223647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.223663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.223685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.223702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.223723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.223739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.223777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.223793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.223822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.223848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.223869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.223885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.223923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.223939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.223978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.223997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.224020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.224037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.224065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.224094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.224118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.224135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.224158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.224175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.224198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 
10:38:44.224215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.224238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.224256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.224279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.224296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.224334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 10:38:44.224351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.224372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 10:38:44.224390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 10:38:44.224411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.224428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.224464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.224480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.224502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.224518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.224538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.224554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.224578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.224595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.224616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102024 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.224632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.224653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.224669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.224689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.224706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.224726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.224742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.224762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.224793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.224824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.224861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.224885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.224901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.224923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.224940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.224962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.224978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.225000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.225017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.225039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.225065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.225087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.225122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.227053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.227116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.227157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.227211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.227250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.227288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.227326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.227364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 
m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.227401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.227456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.227511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.227558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.227600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.227640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.227680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.227720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.227760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.227807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.227851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.227891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.227930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.227953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.227970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.228008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.228024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.228045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.228075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.228115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.228132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.228169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.228186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.228208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.228224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.229692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 
10:38:44.229717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.229758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.229777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.229822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.229846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.229885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.229904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.229927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.229944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.229967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.229984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.230007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 10:38:44.230024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.230047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.230064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 10:38:44.230098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 10:38:44.230114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.230160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102288 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.230220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.230260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.230299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.230337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.230390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.230427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.230463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.230499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.230535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.230570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.230606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.230646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.230683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.230719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.230755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.230791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.230865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.230903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.230940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.230978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 
m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.230998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.231014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.231035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.231051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.231072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.231088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.231109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.231143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.231165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.231181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.231202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.231217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.233593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.233619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.233647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.233665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.233687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.233704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.233728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.233745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.233767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.233784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.233815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.233845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.233868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.233901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.233925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.233956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.233979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.233995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.234037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.234089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.234127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.234180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 
10:38:44.234221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.234260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.234298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.234337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.234376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.234414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.234467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.234506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.754 [2024-07-15 10:38:44.234563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.234601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101952 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.234637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.754 [2024-07-15 10:38:44.234658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.754 [2024-07-15 10:38:44.234673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.234694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.234710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.234731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.234746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.234767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.234806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.234853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.234871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.234895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.234911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.234934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.234951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.234974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.234990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.235013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.235030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.235052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.235069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.235096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.235114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.235136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.235153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.235192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.235209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.235231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.235247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.235270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.235286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.235323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.235339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.236455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.236479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.236506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.236524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.236546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.236564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 
sqhd:001a p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.236601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.236619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.236643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.236660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.236683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.236701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.236729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.236747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.236770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.236787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.236818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.236846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.236869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.236886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.236909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.236926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.236948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.236966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.236988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.237005] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.237028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.237045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.237075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.237108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.237132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.237148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.237171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.237187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.237209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.237226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.237252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.237270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.237292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.237309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.238034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.238057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.238107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.238124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.238146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 
[2024-07-15 10:38:44.238162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.238183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.238198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.238219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.238235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.238255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.238271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.238292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.238307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.238345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.755 [2024-07-15 10:38:44.238361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.238382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.238398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.238421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.755 [2024-07-15 10:38:44.238437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.755 [2024-07-15 10:38:44.238460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.238482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.238506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.238523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.238561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.238579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.238602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.238619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.238642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.238659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.238682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.238699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.239316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.239375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.239412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.239448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.239485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.239521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.239562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.239599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.239654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.239693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.239731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.239769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.239832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.239875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.239915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.239954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 
m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.239977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.239995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.240018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.240035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.241641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.241665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.241698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.241717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.241741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.241773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.241797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.241824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.241847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.241864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.241886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.241903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.241926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.241957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.241980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.241996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.242034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.242072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.242123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.242160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.242196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.242238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.242275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.242311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.242348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 
10:38:44.242384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.242420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.242457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.242494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.242530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.242567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.242603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.242639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.242680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.242701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.756 [2024-07-15 10:38:44.252953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.255013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102544 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.756 [2024-07-15 10:38:44.255042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.756 [2024-07-15 10:38:44.255104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.757 [2024-07-15 10:38:44.255126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.757 [2024-07-15 10:38:44.255150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.757 [2024-07-15 10:38:44.255168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.757 [2024-07-15 10:38:44.255191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.757 [2024-07-15 10:38:44.255208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.757 [2024-07-15 10:38:44.255231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.757 [2024-07-15 10:38:44.255248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.757 [2024-07-15 10:38:44.255271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.757 [2024-07-15 10:38:44.255289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.757 [2024-07-15 10:38:44.255313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.757 [2024-07-15 10:38:44.255330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.757 [2024-07-15 10:38:44.255353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.757 [2024-07-15 10:38:44.255370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.757 [2024-07-15 10:38:44.255408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.757 [2024-07-15 10:38:44.255425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.757 [2024-07-15 10:38:44.255462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.757 [2024-07-15 10:38:44.255478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.757 [2024-07-15 10:38:44.255498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.757 [2024-07-15 10:38:44.255522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.757 [2024-07-15 10:38:44.255544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.757 [2024-07-15 10:38:44.255560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.757 [2024-07-15 10:38:44.255580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.757 [2024-07-15 10:38:44.255596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.757 [2024-07-15 10:38:44.255616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.757 [2024-07-15 10:38:44.255634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.757 [2024-07-15 10:38:44.255655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.757 [2024-07-15 10:38:44.255671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.757 [2024-07-15 10:38:44.255691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.757 [2024-07-15 10:38:44.255707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.757 Received shutdown signal, test time was about 32.347813 seconds 00:21:58.757 00:21:58.757 Latency(us) 00:21:58.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.757 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:58.757 Verification LBA range: start 0x0 length 0x4000 00:21:58.757 Nvme0n1 : 32.35 8176.85 31.94 0.00 0.00 15628.19 267.00 4026531.84 00:21:58.757 =================================================================================================================== 00:21:58.757 Total : 8176.85 31.94 0.00 0.00 15628.19 267.00 4026531.84 00:21:58.757 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 
00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:59.014 rmmod nvme_tcp 00:21:59.014 rmmod nvme_fabrics 00:21:59.014 rmmod nvme_keyring 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1271384 ']' 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1271384 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1271384 ']' 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1271384 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1271384 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1271384' 00:21:59.014 killing process with pid 1271384 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1271384 00:21:59.014 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1271384 00:21:59.272 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:59.272 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:59.272 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:59.272 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.272 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.272 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.272 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.272 10:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.179 10:38:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:01.179 00:22:01.179 real 0m41.180s 00:22:01.179 user 2m3.900s 00:22:01.179 sys 0m10.523s 00:22:01.179 10:38:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:01.179 10:38:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 
00:22:01.179 ************************************ 00:22:01.179 END TEST nvmf_host_multipath_status 00:22:01.179 ************************************ 00:22:01.179 10:38:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:01.179 10:38:49 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:01.179 10:38:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:01.179 10:38:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:01.179 10:38:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:01.179 ************************************ 00:22:01.179 START TEST nvmf_discovery_remove_ifc 00:22:01.179 ************************************ 00:22:01.179 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:01.437 * Looking for test storage... 00:22:01.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:01.437 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.437 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:01.437 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.437 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.437 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.437 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.437 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.437 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.437 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.437 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.437 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.437 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.437 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.438 
10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:01.438 10:38:49 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:01.438 10:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@297 -- # x722=() 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:03.974 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:03.974 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice 
== unknown ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:03.974 Found net devices under 0000:09:00.0: cvl_0_0 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:03.974 Found net devices under 0000:09:00.1: cvl_0_1 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.974 10:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:03.974 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:03.974 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.974 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.974 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.974 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:03.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:03.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:22:03.974 00:22:03.974 --- 10.0.0.2 ping statistics --- 00:22:03.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.974 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:22:03.974 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:03.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:22:03.975 00:22:03.975 --- 10.0.0.1 ping statistics --- 00:22:03.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.975 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1277783 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1277783 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1277783 ']' 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.975 [2024-07-15 10:38:52.136973] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:03.975 [2024-07-15 10:38:52.137040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.975 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.975 [2024-07-15 10:38:52.196472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.975 [2024-07-15 10:38:52.301430] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.975 [2024-07-15 10:38:52.301479] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.975 [2024-07-15 10:38:52.301502] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.975 [2024-07-15 10:38:52.301513] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.975 [2024-07-15 10:38:52.301523] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.975 [2024-07-15 10:38:52.301547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.975 [2024-07-15 10:38:52.446547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.975 [2024-07-15 10:38:52.454702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:03.975 null0 00:22:03.975 [2024-07-15 10:38:52.486672] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1277921 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1277921 /tmp/host.sock 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1277921 ']' 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:22:03.975 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:03.975 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:04.235 [2024-07-15 10:38:52.554476] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:04.235 [2024-07-15 10:38:52.554567] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277921 ] 00:22:04.235 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.235 [2024-07-15 10:38:52.613948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.235 [2024-07-15 10:38:52.721007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.235 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:04.235 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:04.235 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:04.235 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:04.235 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.235 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:04.235 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.235 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:04.235 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.235 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:04.495 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.495 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:04.495 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.495 10:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:05.431 [2024-07-15 10:38:53.914435] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:05.431 [2024-07-15 10:38:53.914468] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:05.431 [2024-07-15 10:38:53.914491] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:05.691 [2024-07-15 10:38:54.041927] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:05.951 [2024-07-15 10:38:54.265797] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:05.951 [2024-07-15 10:38:54.265888] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:05.951 [2024-07-15 10:38:54.265932] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:05.951 [2024-07-15 10:38:54.265955] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:05.951 [2024-07-15 10:38:54.265993] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:05.951 [2024-07-15 10:38:54.273047] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1695870 was disconnected and freed. delete nvme_qpair. 
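The trace above is the discovery bring-up for this test: a second nvmf_tgt instance is launched as the host side on /tmp/host.sock, and bdev_nvme_start_discovery points it at the target's discovery service on 10.0.0.2:8009; once the discovery log page is read, the data subsystem listening on port 4420 is attached and nvme0n1 appears in the bdev list. A minimal sketch of those RPCs, assuming rpc.py from the SPDK tree is on PATH (the trace invokes it through the full workspace path):

    # Host-side setup as traced above (flags copied from the log).
    rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc.py -s /tmp/host.sock framework_start_init
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach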
00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:05.951 10:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:06.913 10:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:06.913 10:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:06.913 10:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:06.913 10:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.914 10:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:06.914 10:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:06.914 10:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:06.914 10:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.194 10:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:07.194 10:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:08.132 10:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:08.132 10:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.132 10:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:08.132 10:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.132 10:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:08.132 10:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:22:08.132 10:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:08.132 10:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.132 10:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:08.132 10:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:09.071 10:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:09.071 10:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.071 10:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:09.071 10:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.071 10:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:09.071 10:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:09.071 10:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:09.071 10:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.071 10:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:09.071 10:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:10.008 10:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:10.008 10:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.008 10:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:10.008 10:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.008 10:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:10.008 10:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:10.008 10:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:10.267 10:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.267 10:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:10.267 10:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:11.205 10:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:11.205 10:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.205 10:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:11.205 10:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.205 10:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:11.205 10:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:11.205 10:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:11.205 10:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
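Each of the sleep-1 iterations above is the same check: list the bdevs over the host RPC socket and compare against the expected name. A minimal sketch of that polling pattern, with the rpc/jq pipeline taken from the trace and the helper names purely illustrative:

    # Illustrative helpers; only the rpc/jq pipeline is from the trace.
    get_bdev_list() {
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme0n1   # blocks until the discovered namespace shows up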
00:22:11.205 10:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:11.205 10:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:11.205 [2024-07-15 10:38:59.707266] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:11.205 [2024-07-15 10:38:59.707334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.205 [2024-07-15 10:38:59.707357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.205 [2024-07-15 10:38:59.707383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.205 [2024-07-15 10:38:59.707397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.205 [2024-07-15 10:38:59.707429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.205 [2024-07-15 10:38:59.707452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.205 [2024-07-15 10:38:59.707466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.205 [2024-07-15 10:38:59.707479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.205 [2024-07-15 10:38:59.707492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.205 [2024-07-15 10:38:59.707504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.205 [2024-07-15 10:38:59.707517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c300 is same with the state(5) to be set 00:22:11.205 [2024-07-15 10:38:59.717282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c300 (9): Bad file descriptor 00:22:11.205 [2024-07-15 10:38:59.727325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:12.142 10:39:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:12.142 10:39:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.142 10:39:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:12.142 10:39:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.142 10:39:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:12.142 10:39:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:12.142 10:39:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:12.402 [2024-07-15 10:39:00.732849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:12.402 [2024-07-15 
10:39:00.732917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c300 with addr=10.0.0.2, port=4420 00:22:12.402 [2024-07-15 10:39:00.732946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c300 is same with the state(5) to be set 00:22:12.402 [2024-07-15 10:39:00.733003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c300 (9): Bad file descriptor 00:22:12.402 [2024-07-15 10:39:00.733482] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.402 [2024-07-15 10:39:00.733511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:12.402 [2024-07-15 10:39:00.733527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:12.402 [2024-07-15 10:39:00.733543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:12.402 [2024-07-15 10:39:00.733574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.402 [2024-07-15 10:39:00.733591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:12.402 10:39:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.402 10:39:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:12.402 10:39:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:13.337 [2024-07-15 10:39:01.736095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.337 [2024-07-15 10:39:01.736133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.337 [2024-07-15 10:39:01.736148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.337 [2024-07-15 10:39:01.736174] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:13.337 [2024-07-15 10:39:01.736193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
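The errno-110 connect failures and failed controller resets above are the intended fault: earlier in the trace the test deleted the target address and downed the interface inside the target namespace, so the host's reconnect attempts (reconnect delay 1 s, controller-loss timeout 2 s) cannot succeed. Restated as a sketch, with the interface and namespace names specific to this rig's e810 ports:

    # Fault injection, as traced earlier in this test.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # Once retries are exhausted the host drops the path and the bdev list goes
    # empty, which the later [[ '' != '' ]] comparison in the trace confirms.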
00:22:13.337 [2024-07-15 10:39:01.736225] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:13.337 [2024-07-15 10:39:01.736274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.337 [2024-07-15 10:39:01.736295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.337 [2024-07-15 10:39:01.736315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.337 [2024-07-15 10:39:01.736328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.337 [2024-07-15 10:39:01.736343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.337 [2024-07-15 10:39:01.736357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.337 [2024-07-15 10:39:01.736371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.337 [2024-07-15 10:39:01.736385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.337 [2024-07-15 10:39:01.736400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.337 [2024-07-15 10:39:01.736413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.337 [2024-07-15 10:39:01.736427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:13.337 [2024-07-15 10:39:01.736572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165b780 (9): Bad file descriptor 00:22:13.337 [2024-07-15 10:39:01.737592] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:13.337 [2024-07-15 10:39:01.737615] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:13.337 10:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:14.715 10:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:14.715 10:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.715 10:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:14.715 10:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.715 10:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:22:14.715 10:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:14.715 10:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:14.715 10:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.715 10:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:14.715 10:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:15.283 [2024-07-15 10:39:03.749705] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:15.283 [2024-07-15 10:39:03.749746] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:15.283 [2024-07-15 10:39:03.749771] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:15.540 [2024-07-15 10:39:03.837031] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:15.540 [2024-07-15 10:39:03.900598] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:15.540 [2024-07-15 10:39:03.900645] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:15.540 [2024-07-15 10:39:03.900678] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:15.540 [2024-07-15 10:39:03.900699] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:15.540 [2024-07-15 10:39:03.900712] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:15.540 [2024-07-15 10:39:03.908438] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1663110 was disconnected and freed. delete nvme_qpair. 
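The @82/@83 steps traced above put 10.0.0.2/24 back on cvl_0_0 inside the cvl_0_0_ns_spdk namespace; once the port is reachable again, the discovery poller reconnects to 10.0.0.2:8009 and re-attaches the subsystem as nvme1 (the bdev_nvme messages just above). A minimal sketch of that recovery step, reusing the namespace and interface names from this run:

    # Restore the target address the test removed earlier so the discovery
    # service at 10.0.0.2:8009 becomes reachable again.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up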
00:22:15.540 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:15.540 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:15.540 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:15.540 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.540 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:15.540 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:15.540 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:15.540 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.540 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:15.540 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:15.540 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1277921 00:22:15.540 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1277921 ']' 00:22:15.540 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1277921 00:22:15.540 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:15.541 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.541 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1277921 00:22:15.541 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:15.541 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:15.541 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1277921' 00:22:15.541 killing process with pid 1277921 00:22:15.541 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1277921 00:22:15.541 10:39:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1277921 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:15.800 rmmod nvme_tcp 00:22:15.800 rmmod nvme_fabrics 00:22:15.800 rmmod nvme_keyring 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
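The get_bdev_list/wait_for_bdev pattern traced above reduces to polling bdev_get_bdevs on the host application's RPC socket until the expected bdev name shows up. A standalone sketch, assuming scripts/rpc.py from the SPDK tree is on hand and the host app listens on /tmp/host.sock (the test's rpc_cmd wrapper does the equivalent):

    get_bdev_list() {
        # All bdev names reported by the host app, sorted onto one line.
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # Poll once a second until nvme1n1 appears in the list.
    while [[ "$(get_bdev_list)" != *nvme1n1* ]]; do sleep 1; done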
00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1277783 ']' 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1277783 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1277783 ']' 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1277783 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1277783 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1277783' 00:22:15.800 killing process with pid 1277783 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1277783 00:22:15.800 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1277783 00:22:16.058 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:16.058 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:16.058 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:16.058 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:16.058 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:16.058 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.058 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.058 10:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.596 10:39:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:18.596 00:22:18.596 real 0m16.881s 00:22:18.596 user 0m23.813s 00:22:18.596 sys 0m3.048s 00:22:18.596 10:39:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:18.596 10:39:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:18.596 ************************************ 00:22:18.596 END TEST nvmf_discovery_remove_ifc 00:22:18.596 ************************************ 00:22:18.596 10:39:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:18.596 10:39:06 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:18.596 10:39:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:18.596 10:39:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:18.596 10:39:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:18.596 ************************************ 00:22:18.596 START TEST nvmf_identify_kernel_target 00:22:18.596 ************************************ 
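The tear-down traced above (killprocess plus nvmftestfini) follows the usual pattern: stop the host and target processes, then retry unloading the initiator modules, since nvme-tcp and nvme-fabrics can still be busy for a moment after the controllers go away. A loose sketch, with placeholder pid variables and the same {1..20} retry loop as the trace:

    kill "$host_pid" "$nvmf_tgt_pid" 2>/dev/null || true   # placeholders for the real pids
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e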
00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:18.596 * Looking for test storage... 00:22:18.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:18.596 10:39:06 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:18.596 10:39:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:20.493 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:20.493 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:20.494 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:20.494 Found net devices under 0000:09:00.0: cvl_0_0 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:20.494 Found net devices under 0000:09:00.1: cvl_0_1 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:20.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:20.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:22:20.494 00:22:20.494 --- 10.0.0.2 ping statistics --- 00:22:20.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.494 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:20.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:20.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:22:20.494 00:22:20.494 --- 10.0.0.1 ping statistics --- 00:22:20.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.494 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:20.494 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:20.495 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:20.495 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:20.495 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:20.495 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:20.495 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:20.495 10:39:08 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:20.495 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:20.495 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:20.495 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:20.495 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:20.495 10:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:21.867 Waiting for block devices as requested 00:22:21.867 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:21.867 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:21.867 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:21.867 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:22.126 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:22.126 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:22.126 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:22.126 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:22.386 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:22:22.386 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:22.386 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:22.646 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:22.646 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:22.646 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:22.905 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:22.905 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:22.905 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:23.165 No valid GPT data, bailing 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:23.165 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:22:23.165 00:22:23.165 Discovery Log Number of Records 2, Generation counter 2 00:22:23.165 =====Discovery Log Entry 0====== 00:22:23.165 trtype: tcp 00:22:23.165 adrfam: ipv4 00:22:23.165 subtype: current discovery subsystem 00:22:23.165 treq: not specified, sq flow control disable supported 00:22:23.165 portid: 1 00:22:23.165 trsvcid: 4420 00:22:23.165 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:23.165 traddr: 10.0.0.1 00:22:23.165 eflags: none 00:22:23.165 sectype: none 00:22:23.165 =====Discovery Log Entry 1====== 00:22:23.165 trtype: tcp 00:22:23.165 adrfam: ipv4 00:22:23.165 subtype: nvme subsystem 00:22:23.165 treq: not specified, sq flow control disable supported 00:22:23.165 portid: 1 00:22:23.165 trsvcid: 4420 00:22:23.166 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:23.166 traddr: 10.0.0.1 00:22:23.166 eflags: none 00:22:23.166 sectype: none 00:22:23.166 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:23.166 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:23.166 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.426 ===================================================== 00:22:23.426 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:23.426 ===================================================== 00:22:23.426 Controller Capabilities/Features 00:22:23.426 ================================ 00:22:23.426 Vendor ID: 0000 00:22:23.426 Subsystem Vendor ID: 0000 00:22:23.426 Serial Number: b3bb8ffd7f8c1c358cd2 00:22:23.426 Model Number: Linux 00:22:23.426 Firmware Version: 6.7.0-68 00:22:23.426 Recommended Arb Burst: 0 00:22:23.426 IEEE OUI Identifier: 00 00 00 00:22:23.426 Multi-path I/O 00:22:23.426 May have multiple subsystem ports: No 00:22:23.426 May have multiple 
controllers: No 00:22:23.426 Associated with SR-IOV VF: No 00:22:23.426 Max Data Transfer Size: Unlimited 00:22:23.426 Max Number of Namespaces: 0 00:22:23.426 Max Number of I/O Queues: 1024 00:22:23.426 NVMe Specification Version (VS): 1.3 00:22:23.426 NVMe Specification Version (Identify): 1.3 00:22:23.426 Maximum Queue Entries: 1024 00:22:23.426 Contiguous Queues Required: No 00:22:23.426 Arbitration Mechanisms Supported 00:22:23.426 Weighted Round Robin: Not Supported 00:22:23.426 Vendor Specific: Not Supported 00:22:23.426 Reset Timeout: 7500 ms 00:22:23.426 Doorbell Stride: 4 bytes 00:22:23.426 NVM Subsystem Reset: Not Supported 00:22:23.426 Command Sets Supported 00:22:23.426 NVM Command Set: Supported 00:22:23.426 Boot Partition: Not Supported 00:22:23.426 Memory Page Size Minimum: 4096 bytes 00:22:23.426 Memory Page Size Maximum: 4096 bytes 00:22:23.426 Persistent Memory Region: Not Supported 00:22:23.426 Optional Asynchronous Events Supported 00:22:23.426 Namespace Attribute Notices: Not Supported 00:22:23.426 Firmware Activation Notices: Not Supported 00:22:23.426 ANA Change Notices: Not Supported 00:22:23.426 PLE Aggregate Log Change Notices: Not Supported 00:22:23.426 LBA Status Info Alert Notices: Not Supported 00:22:23.426 EGE Aggregate Log Change Notices: Not Supported 00:22:23.426 Normal NVM Subsystem Shutdown event: Not Supported 00:22:23.426 Zone Descriptor Change Notices: Not Supported 00:22:23.426 Discovery Log Change Notices: Supported 00:22:23.426 Controller Attributes 00:22:23.426 128-bit Host Identifier: Not Supported 00:22:23.426 Non-Operational Permissive Mode: Not Supported 00:22:23.426 NVM Sets: Not Supported 00:22:23.426 Read Recovery Levels: Not Supported 00:22:23.426 Endurance Groups: Not Supported 00:22:23.426 Predictable Latency Mode: Not Supported 00:22:23.426 Traffic Based Keep ALive: Not Supported 00:22:23.426 Namespace Granularity: Not Supported 00:22:23.426 SQ Associations: Not Supported 00:22:23.426 UUID List: Not Supported 00:22:23.426 Multi-Domain Subsystem: Not Supported 00:22:23.426 Fixed Capacity Management: Not Supported 00:22:23.426 Variable Capacity Management: Not Supported 00:22:23.426 Delete Endurance Group: Not Supported 00:22:23.426 Delete NVM Set: Not Supported 00:22:23.426 Extended LBA Formats Supported: Not Supported 00:22:23.426 Flexible Data Placement Supported: Not Supported 00:22:23.426 00:22:23.426 Controller Memory Buffer Support 00:22:23.426 ================================ 00:22:23.426 Supported: No 00:22:23.426 00:22:23.426 Persistent Memory Region Support 00:22:23.426 ================================ 00:22:23.426 Supported: No 00:22:23.426 00:22:23.426 Admin Command Set Attributes 00:22:23.426 ============================ 00:22:23.426 Security Send/Receive: Not Supported 00:22:23.426 Format NVM: Not Supported 00:22:23.426 Firmware Activate/Download: Not Supported 00:22:23.426 Namespace Management: Not Supported 00:22:23.426 Device Self-Test: Not Supported 00:22:23.426 Directives: Not Supported 00:22:23.426 NVMe-MI: Not Supported 00:22:23.426 Virtualization Management: Not Supported 00:22:23.426 Doorbell Buffer Config: Not Supported 00:22:23.426 Get LBA Status Capability: Not Supported 00:22:23.426 Command & Feature Lockdown Capability: Not Supported 00:22:23.426 Abort Command Limit: 1 00:22:23.426 Async Event Request Limit: 1 00:22:23.426 Number of Firmware Slots: N/A 00:22:23.426 Firmware Slot 1 Read-Only: N/A 00:22:23.426 Firmware Activation Without Reset: N/A 00:22:23.426 Multiple Update Detection Support: N/A 
00:22:23.426 Firmware Update Granularity: No Information Provided 00:22:23.426 Per-Namespace SMART Log: No 00:22:23.426 Asymmetric Namespace Access Log Page: Not Supported 00:22:23.426 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:23.426 Command Effects Log Page: Not Supported 00:22:23.426 Get Log Page Extended Data: Supported 00:22:23.426 Telemetry Log Pages: Not Supported 00:22:23.427 Persistent Event Log Pages: Not Supported 00:22:23.427 Supported Log Pages Log Page: May Support 00:22:23.427 Commands Supported & Effects Log Page: Not Supported 00:22:23.427 Feature Identifiers & Effects Log Page:May Support 00:22:23.427 NVMe-MI Commands & Effects Log Page: May Support 00:22:23.427 Data Area 4 for Telemetry Log: Not Supported 00:22:23.427 Error Log Page Entries Supported: 1 00:22:23.427 Keep Alive: Not Supported 00:22:23.427 00:22:23.427 NVM Command Set Attributes 00:22:23.427 ========================== 00:22:23.427 Submission Queue Entry Size 00:22:23.427 Max: 1 00:22:23.427 Min: 1 00:22:23.427 Completion Queue Entry Size 00:22:23.427 Max: 1 00:22:23.427 Min: 1 00:22:23.427 Number of Namespaces: 0 00:22:23.427 Compare Command: Not Supported 00:22:23.427 Write Uncorrectable Command: Not Supported 00:22:23.427 Dataset Management Command: Not Supported 00:22:23.427 Write Zeroes Command: Not Supported 00:22:23.427 Set Features Save Field: Not Supported 00:22:23.427 Reservations: Not Supported 00:22:23.427 Timestamp: Not Supported 00:22:23.427 Copy: Not Supported 00:22:23.427 Volatile Write Cache: Not Present 00:22:23.427 Atomic Write Unit (Normal): 1 00:22:23.427 Atomic Write Unit (PFail): 1 00:22:23.427 Atomic Compare & Write Unit: 1 00:22:23.427 Fused Compare & Write: Not Supported 00:22:23.427 Scatter-Gather List 00:22:23.427 SGL Command Set: Supported 00:22:23.427 SGL Keyed: Not Supported 00:22:23.427 SGL Bit Bucket Descriptor: Not Supported 00:22:23.427 SGL Metadata Pointer: Not Supported 00:22:23.427 Oversized SGL: Not Supported 00:22:23.427 SGL Metadata Address: Not Supported 00:22:23.427 SGL Offset: Supported 00:22:23.427 Transport SGL Data Block: Not Supported 00:22:23.427 Replay Protected Memory Block: Not Supported 00:22:23.427 00:22:23.427 Firmware Slot Information 00:22:23.427 ========================= 00:22:23.427 Active slot: 0 00:22:23.427 00:22:23.427 00:22:23.427 Error Log 00:22:23.427 ========= 00:22:23.427 00:22:23.427 Active Namespaces 00:22:23.427 ================= 00:22:23.427 Discovery Log Page 00:22:23.427 ================== 00:22:23.427 Generation Counter: 2 00:22:23.427 Number of Records: 2 00:22:23.427 Record Format: 0 00:22:23.427 00:22:23.427 Discovery Log Entry 0 00:22:23.427 ---------------------- 00:22:23.427 Transport Type: 3 (TCP) 00:22:23.427 Address Family: 1 (IPv4) 00:22:23.427 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:23.427 Entry Flags: 00:22:23.427 Duplicate Returned Information: 0 00:22:23.427 Explicit Persistent Connection Support for Discovery: 0 00:22:23.427 Transport Requirements: 00:22:23.427 Secure Channel: Not Specified 00:22:23.427 Port ID: 1 (0x0001) 00:22:23.427 Controller ID: 65535 (0xffff) 00:22:23.427 Admin Max SQ Size: 32 00:22:23.427 Transport Service Identifier: 4420 00:22:23.427 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:23.427 Transport Address: 10.0.0.1 00:22:23.427 Discovery Log Entry 1 00:22:23.427 ---------------------- 00:22:23.427 Transport Type: 3 (TCP) 00:22:23.427 Address Family: 1 (IPv4) 00:22:23.427 Subsystem Type: 2 (NVM Subsystem) 00:22:23.427 Entry Flags: 
00:22:23.427 Duplicate Returned Information: 0 00:22:23.427 Explicit Persistent Connection Support for Discovery: 0 00:22:23.427 Transport Requirements: 00:22:23.427 Secure Channel: Not Specified 00:22:23.427 Port ID: 1 (0x0001) 00:22:23.427 Controller ID: 65535 (0xffff) 00:22:23.427 Admin Max SQ Size: 32 00:22:23.427 Transport Service Identifier: 4420 00:22:23.427 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:23.427 Transport Address: 10.0.0.1 00:22:23.427 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:23.427 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.427 get_feature(0x01) failed 00:22:23.427 get_feature(0x02) failed 00:22:23.427 get_feature(0x04) failed 00:22:23.427 ===================================================== 00:22:23.427 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:23.427 ===================================================== 00:22:23.427 Controller Capabilities/Features 00:22:23.427 ================================ 00:22:23.427 Vendor ID: 0000 00:22:23.427 Subsystem Vendor ID: 0000 00:22:23.427 Serial Number: 218d1957157df19b66ae 00:22:23.427 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:23.427 Firmware Version: 6.7.0-68 00:22:23.427 Recommended Arb Burst: 6 00:22:23.427 IEEE OUI Identifier: 00 00 00 00:22:23.427 Multi-path I/O 00:22:23.427 May have multiple subsystem ports: Yes 00:22:23.427 May have multiple controllers: Yes 00:22:23.427 Associated with SR-IOV VF: No 00:22:23.427 Max Data Transfer Size: Unlimited 00:22:23.427 Max Number of Namespaces: 1024 00:22:23.427 Max Number of I/O Queues: 128 00:22:23.427 NVMe Specification Version (VS): 1.3 00:22:23.427 NVMe Specification Version (Identify): 1.3 00:22:23.427 Maximum Queue Entries: 1024 00:22:23.427 Contiguous Queues Required: No 00:22:23.427 Arbitration Mechanisms Supported 00:22:23.427 Weighted Round Robin: Not Supported 00:22:23.427 Vendor Specific: Not Supported 00:22:23.427 Reset Timeout: 7500 ms 00:22:23.427 Doorbell Stride: 4 bytes 00:22:23.427 NVM Subsystem Reset: Not Supported 00:22:23.427 Command Sets Supported 00:22:23.427 NVM Command Set: Supported 00:22:23.427 Boot Partition: Not Supported 00:22:23.427 Memory Page Size Minimum: 4096 bytes 00:22:23.427 Memory Page Size Maximum: 4096 bytes 00:22:23.427 Persistent Memory Region: Not Supported 00:22:23.427 Optional Asynchronous Events Supported 00:22:23.427 Namespace Attribute Notices: Supported 00:22:23.427 Firmware Activation Notices: Not Supported 00:22:23.427 ANA Change Notices: Supported 00:22:23.427 PLE Aggregate Log Change Notices: Not Supported 00:22:23.427 LBA Status Info Alert Notices: Not Supported 00:22:23.427 EGE Aggregate Log Change Notices: Not Supported 00:22:23.427 Normal NVM Subsystem Shutdown event: Not Supported 00:22:23.427 Zone Descriptor Change Notices: Not Supported 00:22:23.427 Discovery Log Change Notices: Not Supported 00:22:23.427 Controller Attributes 00:22:23.428 128-bit Host Identifier: Supported 00:22:23.428 Non-Operational Permissive Mode: Not Supported 00:22:23.428 NVM Sets: Not Supported 00:22:23.428 Read Recovery Levels: Not Supported 00:22:23.428 Endurance Groups: Not Supported 00:22:23.428 Predictable Latency Mode: Not Supported 00:22:23.428 Traffic Based Keep ALive: Supported 00:22:23.428 Namespace Granularity: Not Supported 
00:22:23.428 SQ Associations: Not Supported 00:22:23.428 UUID List: Not Supported 00:22:23.428 Multi-Domain Subsystem: Not Supported 00:22:23.428 Fixed Capacity Management: Not Supported 00:22:23.428 Variable Capacity Management: Not Supported 00:22:23.428 Delete Endurance Group: Not Supported 00:22:23.428 Delete NVM Set: Not Supported 00:22:23.428 Extended LBA Formats Supported: Not Supported 00:22:23.428 Flexible Data Placement Supported: Not Supported 00:22:23.428 00:22:23.428 Controller Memory Buffer Support 00:22:23.428 ================================ 00:22:23.428 Supported: No 00:22:23.428 00:22:23.428 Persistent Memory Region Support 00:22:23.428 ================================ 00:22:23.428 Supported: No 00:22:23.428 00:22:23.428 Admin Command Set Attributes 00:22:23.428 ============================ 00:22:23.428 Security Send/Receive: Not Supported 00:22:23.428 Format NVM: Not Supported 00:22:23.428 Firmware Activate/Download: Not Supported 00:22:23.428 Namespace Management: Not Supported 00:22:23.428 Device Self-Test: Not Supported 00:22:23.428 Directives: Not Supported 00:22:23.428 NVMe-MI: Not Supported 00:22:23.428 Virtualization Management: Not Supported 00:22:23.428 Doorbell Buffer Config: Not Supported 00:22:23.428 Get LBA Status Capability: Not Supported 00:22:23.428 Command & Feature Lockdown Capability: Not Supported 00:22:23.428 Abort Command Limit: 4 00:22:23.428 Async Event Request Limit: 4 00:22:23.428 Number of Firmware Slots: N/A 00:22:23.428 Firmware Slot 1 Read-Only: N/A 00:22:23.428 Firmware Activation Without Reset: N/A 00:22:23.428 Multiple Update Detection Support: N/A 00:22:23.428 Firmware Update Granularity: No Information Provided 00:22:23.428 Per-Namespace SMART Log: Yes 00:22:23.428 Asymmetric Namespace Access Log Page: Supported 00:22:23.428 ANA Transition Time : 10 sec 00:22:23.428 00:22:23.428 Asymmetric Namespace Access Capabilities 00:22:23.428 ANA Optimized State : Supported 00:22:23.428 ANA Non-Optimized State : Supported 00:22:23.428 ANA Inaccessible State : Supported 00:22:23.428 ANA Persistent Loss State : Supported 00:22:23.428 ANA Change State : Supported 00:22:23.428 ANAGRPID is not changed : No 00:22:23.428 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:23.428 00:22:23.428 ANA Group Identifier Maximum : 128 00:22:23.428 Number of ANA Group Identifiers : 128 00:22:23.428 Max Number of Allowed Namespaces : 1024 00:22:23.428 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:23.428 Command Effects Log Page: Supported 00:22:23.428 Get Log Page Extended Data: Supported 00:22:23.428 Telemetry Log Pages: Not Supported 00:22:23.428 Persistent Event Log Pages: Not Supported 00:22:23.428 Supported Log Pages Log Page: May Support 00:22:23.428 Commands Supported & Effects Log Page: Not Supported 00:22:23.428 Feature Identifiers & Effects Log Page:May Support 00:22:23.428 NVMe-MI Commands & Effects Log Page: May Support 00:22:23.428 Data Area 4 for Telemetry Log: Not Supported 00:22:23.428 Error Log Page Entries Supported: 128 00:22:23.428 Keep Alive: Supported 00:22:23.428 Keep Alive Granularity: 1000 ms 00:22:23.428 00:22:23.428 NVM Command Set Attributes 00:22:23.428 ========================== 00:22:23.428 Submission Queue Entry Size 00:22:23.428 Max: 64 00:22:23.428 Min: 64 00:22:23.428 Completion Queue Entry Size 00:22:23.428 Max: 16 00:22:23.428 Min: 16 00:22:23.428 Number of Namespaces: 1024 00:22:23.428 Compare Command: Not Supported 00:22:23.428 Write Uncorrectable Command: Not Supported 00:22:23.428 Dataset Management Command: Supported 
00:22:23.428 Write Zeroes Command: Supported 00:22:23.428 Set Features Save Field: Not Supported 00:22:23.428 Reservations: Not Supported 00:22:23.428 Timestamp: Not Supported 00:22:23.428 Copy: Not Supported 00:22:23.428 Volatile Write Cache: Present 00:22:23.428 Atomic Write Unit (Normal): 1 00:22:23.428 Atomic Write Unit (PFail): 1 00:22:23.428 Atomic Compare & Write Unit: 1 00:22:23.428 Fused Compare & Write: Not Supported 00:22:23.428 Scatter-Gather List 00:22:23.428 SGL Command Set: Supported 00:22:23.428 SGL Keyed: Not Supported 00:22:23.428 SGL Bit Bucket Descriptor: Not Supported 00:22:23.428 SGL Metadata Pointer: Not Supported 00:22:23.428 Oversized SGL: Not Supported 00:22:23.428 SGL Metadata Address: Not Supported 00:22:23.428 SGL Offset: Supported 00:22:23.428 Transport SGL Data Block: Not Supported 00:22:23.428 Replay Protected Memory Block: Not Supported 00:22:23.428 00:22:23.428 Firmware Slot Information 00:22:23.428 ========================= 00:22:23.428 Active slot: 0 00:22:23.428 00:22:23.428 Asymmetric Namespace Access 00:22:23.428 =========================== 00:22:23.428 Change Count : 0 00:22:23.428 Number of ANA Group Descriptors : 1 00:22:23.428 ANA Group Descriptor : 0 00:22:23.428 ANA Group ID : 1 00:22:23.428 Number of NSID Values : 1 00:22:23.428 Change Count : 0 00:22:23.428 ANA State : 1 00:22:23.428 Namespace Identifier : 1 00:22:23.428 00:22:23.428 Commands Supported and Effects 00:22:23.428 ============================== 00:22:23.428 Admin Commands 00:22:23.428 -------------- 00:22:23.428 Get Log Page (02h): Supported 00:22:23.428 Identify (06h): Supported 00:22:23.428 Abort (08h): Supported 00:22:23.428 Set Features (09h): Supported 00:22:23.428 Get Features (0Ah): Supported 00:22:23.428 Asynchronous Event Request (0Ch): Supported 00:22:23.428 Keep Alive (18h): Supported 00:22:23.428 I/O Commands 00:22:23.428 ------------ 00:22:23.428 Flush (00h): Supported 00:22:23.428 Write (01h): Supported LBA-Change 00:22:23.428 Read (02h): Supported 00:22:23.428 Write Zeroes (08h): Supported LBA-Change 00:22:23.428 Dataset Management (09h): Supported 00:22:23.428 00:22:23.428 Error Log 00:22:23.428 ========= 00:22:23.428 Entry: 0 00:22:23.428 Error Count: 0x3 00:22:23.428 Submission Queue Id: 0x0 00:22:23.428 Command Id: 0x5 00:22:23.428 Phase Bit: 0 00:22:23.428 Status Code: 0x2 00:22:23.428 Status Code Type: 0x0 00:22:23.428 Do Not Retry: 1 00:22:23.429 Error Location: 0x28 00:22:23.429 LBA: 0x0 00:22:23.429 Namespace: 0x0 00:22:23.429 Vendor Log Page: 0x0 00:22:23.429 ----------- 00:22:23.429 Entry: 1 00:22:23.429 Error Count: 0x2 00:22:23.429 Submission Queue Id: 0x0 00:22:23.429 Command Id: 0x5 00:22:23.429 Phase Bit: 0 00:22:23.429 Status Code: 0x2 00:22:23.429 Status Code Type: 0x0 00:22:23.429 Do Not Retry: 1 00:22:23.429 Error Location: 0x28 00:22:23.429 LBA: 0x0 00:22:23.429 Namespace: 0x0 00:22:23.429 Vendor Log Page: 0x0 00:22:23.429 ----------- 00:22:23.429 Entry: 2 00:22:23.429 Error Count: 0x1 00:22:23.429 Submission Queue Id: 0x0 00:22:23.429 Command Id: 0x4 00:22:23.429 Phase Bit: 0 00:22:23.429 Status Code: 0x2 00:22:23.429 Status Code Type: 0x0 00:22:23.429 Do Not Retry: 1 00:22:23.429 Error Location: 0x28 00:22:23.429 LBA: 0x0 00:22:23.429 Namespace: 0x0 00:22:23.429 Vendor Log Page: 0x0 00:22:23.429 00:22:23.429 Number of Queues 00:22:23.429 ================ 00:22:23.429 Number of I/O Submission Queues: 128 00:22:23.429 Number of I/O Completion Queues: 128 00:22:23.429 00:22:23.429 ZNS Specific Controller Data 00:22:23.429 
============================ 00:22:23.429 Zone Append Size Limit: 0 00:22:23.429 00:22:23.429 00:22:23.429 Active Namespaces 00:22:23.429 ================= 00:22:23.429 get_feature(0x05) failed 00:22:23.429 Namespace ID:1 00:22:23.429 Command Set Identifier: NVM (00h) 00:22:23.429 Deallocate: Supported 00:22:23.429 Deallocated/Unwritten Error: Not Supported 00:22:23.429 Deallocated Read Value: Unknown 00:22:23.429 Deallocate in Write Zeroes: Not Supported 00:22:23.429 Deallocated Guard Field: 0xFFFF 00:22:23.429 Flush: Supported 00:22:23.429 Reservation: Not Supported 00:22:23.429 Namespace Sharing Capabilities: Multiple Controllers 00:22:23.429 Size (in LBAs): 1953525168 (931GiB) 00:22:23.429 Capacity (in LBAs): 1953525168 (931GiB) 00:22:23.429 Utilization (in LBAs): 1953525168 (931GiB) 00:22:23.429 UUID: c8ceeb88-06c3-4b38-86eb-76c09394f878 00:22:23.429 Thin Provisioning: Not Supported 00:22:23.429 Per-NS Atomic Units: Yes 00:22:23.429 Atomic Boundary Size (Normal): 0 00:22:23.429 Atomic Boundary Size (PFail): 0 00:22:23.429 Atomic Boundary Offset: 0 00:22:23.429 NGUID/EUI64 Never Reused: No 00:22:23.429 ANA group ID: 1 00:22:23.429 Namespace Write Protected: No 00:22:23.429 Number of LBA Formats: 1 00:22:23.429 Current LBA Format: LBA Format #00 00:22:23.429 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:23.429 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.429 rmmod nvme_tcp 00:22:23.429 rmmod nvme_fabrics 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.429 10:39:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.967 10:39:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:25.967 
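The nvmftestfini teardown traced above reduces to a short shell sequence; the following is a sketch reconstructed from the xtrace (the interface and namespace names cvl_0_1 and cvl_0_0_ns_spdk are specific to this rig, and the assumption that _remove_spdk_ns deletes that namespace comes from the trace, not from the helper's source). The configfs cleanup for the kernel target follows in the next trace block.

    # unload the initiator-side NVMe/TCP transport modules
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # drop the network namespace holding the target-side port
    # (assumption: this is what _remove_spdk_ns does for cvl_0_0_ns_spdk)
    ip netns delete cvl_0_0_ns_spdk
    # flush the address left on the initiator-facing e810 port
    ip -4 addr flush cvl_0_1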
10:39:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:25.967 10:39:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:25.967 10:39:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:25.967 10:39:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:25.967 10:39:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:25.968 10:39:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:25.968 10:39:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:25.968 10:39:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:25.968 10:39:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:25.968 10:39:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:26.534 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:26.534 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:26.534 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:26.534 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:26.793 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:26.793 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:26.793 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:26.793 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:26.793 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:26.793 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:26.793 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:26.793 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:26.793 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:26.793 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:26.793 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:26.793 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:27.732 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:22:27.732 00:22:27.732 real 0m9.601s 00:22:27.732 user 0m2.013s 00:22:27.732 sys 0m3.454s 00:22:27.732 10:39:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:27.732 10:39:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.732 ************************************ 00:22:27.732 END TEST nvmf_identify_kernel_target 00:22:27.732 ************************************ 00:22:27.732 10:39:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:27.732 10:39:16 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:27.732 10:39:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:27.732 10:39:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:27.732 10:39:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:27.992 ************************************ 00:22:27.992 START TEST nvmf_auth_host 00:22:27.992 ************************************ 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:27.992 * Looking for test storage... 00:22:27.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:27.992 10:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.990 
10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:29.990 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:29.990 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:29.990 Found net devices under 0000:09:00.0: 
cvl_0_0 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.990 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:29.991 Found net devices under 0000:09:00.1: cvl_0_1 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:29.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:22:29.991 00:22:29.991 --- 10.0.0.2 ping statistics --- 00:22:29.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.991 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:29.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:22:29.991 00:22:29.991 --- 10.0.0.1 ping statistics --- 00:22:29.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.991 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1285478 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1285478 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1285478 ']' 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
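The TCP test-network bring-up traced above is a small, self-contained sequence; here is the same set of steps in plain shell, as a sketch pulled from the xtrace (the 10.0.0.1/10.0.0.2 addresses and the cvl_0_* / cvl_0_0_ns_spdk names are this rig's defaults from nvmf/common.sh):

    # one e810 port (cvl_0_0) moves into a private namespace for the target;
    # its peer port (cvl_0_1) stays in the root namespace for the initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP (port 4420) traffic in on the host-side interface,
    # then verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1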
00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.991 10:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f3eb172af4cc8dea0ee391f699cc731f 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pn8 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f3eb172af4cc8dea0ee391f699cc731f 0 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f3eb172af4cc8dea0ee391f699cc731f 0 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f3eb172af4cc8dea0ee391f699cc731f 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pn8 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pn8 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.pn8 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:30.559 
10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a66e6c8da6a416cef83507492d047c552369131db498acdb047193ef271cf80f 00:22:30.559 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.fBo 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a66e6c8da6a416cef83507492d047c552369131db498acdb047193ef271cf80f 3 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a66e6c8da6a416cef83507492d047c552369131db498acdb047193ef271cf80f 3 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a66e6c8da6a416cef83507492d047c552369131db498acdb047193ef271cf80f 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.fBo 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.fBo 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.fBo 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=da3dc0487bfefc9f91f4f7e41f3ee3b3512a3949465d542e 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.oU1 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key da3dc0487bfefc9f91f4f7e41f3ee3b3512a3949465d542e 0 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 da3dc0487bfefc9f91f4f7e41f3ee3b3512a3949465d542e 0 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=da3dc0487bfefc9f91f4f7e41f3ee3b3512a3949465d542e 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.oU1 00:22:30.560 10:39:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.oU1 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.oU1 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dad82ce18760f974d46fc3f6f59e794ec996e694d63121c4 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.QO4 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dad82ce18760f974d46fc3f6f59e794ec996e694d63121c4 2 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dad82ce18760f974d46fc3f6f59e794ec996e694d63121c4 2 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dad82ce18760f974d46fc3f6f59e794ec996e694d63121c4 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:30.560 10:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.QO4 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.QO4 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.QO4 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=31fdafd525969250c69ed3a2f21ee88c 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Dg9 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 31fdafd525969250c69ed3a2f21ee88c 1 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 31fdafd525969250c69ed3a2f21ee88c 1 
00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=31fdafd525969250c69ed3a2f21ee88c 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Dg9 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Dg9 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Dg9 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f73742032edbe3c86bd065dd075ff53d 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.uyY 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f73742032edbe3c86bd065dd075ff53d 1 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f73742032edbe3c86bd065dd075ff53d 1 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f73742032edbe3c86bd065dd075ff53d 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:30.560 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.uyY 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.uyY 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.uyY 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=1364ca0a46bbf03186e128769b6655715d55c4f1dbf1a992 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Y4P 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1364ca0a46bbf03186e128769b6655715d55c4f1dbf1a992 2 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1364ca0a46bbf03186e128769b6655715d55c4f1dbf1a992 2 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1364ca0a46bbf03186e128769b6655715d55c4f1dbf1a992 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Y4P 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Y4P 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Y4P 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d4cfc63863e6df35d7d35f734f120bbc 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.JKs 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d4cfc63863e6df35d7d35f734f120bbc 0 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d4cfc63863e6df35d7d35f734f120bbc 0 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d4cfc63863e6df35d7d35f734f120bbc 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.JKs 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.JKs 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.JKs 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=417571da9d04c4255ab7b1675b7255e0d90c7983d4e6abfebda566b123e08c51 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.t72 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 417571da9d04c4255ab7b1675b7255e0d90c7983d4e6abfebda566b123e08c51 3 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 417571da9d04c4255ab7b1675b7255e0d90c7983d4e6abfebda566b123e08c51 3 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=417571da9d04c4255ab7b1675b7255e0d90c7983d4e6abfebda566b123e08c51 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.t72 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.t72 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.t72 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1285478 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1285478 ']' 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
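The key material generated above follows one recipe, repeated for keys[0..4] and the matching ckeys; below is a condensed sketch of what gen_dhchap_key does for a 32-hex-character "null" key, reconstructed from the xtrace (the python step that produces the DHHC-1 secret string lives in nvmf/common.sh and is only summarized in the comments; the CRC detail is an assumption based on the NVMe in-band auth secret format). The resulting file paths are handed to keyring_file_add_key in the RPC calls that follow.

    # 32 hex characters = 16 random bytes; this run also generates 48- and
    # 64-character keys the same way (xxd -l 24 and -l 32)
    key=$(xxd -p -c0 -l 16 /dev/urandom)
    file=$(mktemp -t spdk.key-null.XXX)
    # digest index from the trace: 0=null, 1=sha256, 2=sha384, 3=sha512;
    # the helper emits "DHHC-1:0<digest>:<base64 of the ASCII key,
    # assumption: with a CRC-32 appended>:"
    format_dhchap_key "$key" 0 > "$file"
    chmod 0600 "$file"
    echo "$file"    # path later registered via keyring_file_add_key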
00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.821 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pn8 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.fBo ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fBo 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.oU1 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.QO4 ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.QO4 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Dg9 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.uyY ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uyY 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Y4P 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.JKs ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.JKs 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.t72 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:31.080 10:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:32.455 Waiting for block devices as requested 00:22:32.455 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:32.455 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:32.455 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:32.713 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:32.713 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:32.713 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:32.713 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:32.972 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:32.972 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:22:32.972 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:33.232 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:33.232 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:33.232 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:33.490 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:33.490 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:33.490 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:33.490 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:34.056 No valid GPT data, bailing 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:34.056 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:22:34.057 00:22:34.057 Discovery Log Number of Records 2, Generation counter 2 00:22:34.057 =====Discovery Log Entry 0====== 00:22:34.057 trtype: tcp 00:22:34.057 adrfam: ipv4 00:22:34.057 subtype: current discovery subsystem 00:22:34.057 treq: not specified, sq flow control disable supported 00:22:34.057 portid: 1 00:22:34.057 trsvcid: 4420 00:22:34.057 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:34.057 traddr: 10.0.0.1 00:22:34.057 eflags: none 00:22:34.057 sectype: none 00:22:34.057 =====Discovery Log Entry 1====== 00:22:34.057 trtype: tcp 00:22:34.057 adrfam: ipv4 00:22:34.057 subtype: nvme subsystem 00:22:34.057 treq: not specified, sq flow control disable supported 00:22:34.057 portid: 1 00:22:34.057 trsvcid: 4420 00:22:34.057 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:34.057 traddr: 10.0.0.1 00:22:34.057 eflags: none 00:22:34.057 sectype: none 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 
]] 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.057 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.317 nvme0n1 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.317 
10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.317 
10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:34.317 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:34.318 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:34.318 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.318 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.318 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:34.318 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.318 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:34.318 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:34.318 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:34.318 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.318 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.318 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.577 nvme0n1 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:34.577 10:39:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.577 10:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.577 nvme0n1 00:22:34.577 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.577 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.577 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.577 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.577 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
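[annotation] The block above is one complete connect_authenticate pass (sha256 digest, ffdhe2048 group, keyid 1): bdev_nvme_set_options restricts the digests/DH groups the SPDK host offers, bdev_nvme_attach_controller performs the authenticated connect, and get_controllers/detach_controller verify and tear it down before the next combination. rpc_cmd is the autotest wrapper around scripts/rpc.py, so run by hand the same step looks roughly like the following (the key file paths are placeholders, not the temp files from this run):

  ./scripts/rpc.py keyring_file_add_key key1  /path/to/key1.dhchap    # host key (placeholder path)
  ./scripts/rpc.py keyring_file_add_key ckey1 /path/to/ckey1.dhchap   # controller key for bidirectional auth
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers                          # should list nvme0 once authentication succeeds
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0                  # teardown before the next digest/dhgroup pass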
00:22:34.577 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.577 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.577 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.577 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.577 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.838 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.839 nvme0n1 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:34.839 10:39:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.839 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.840 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:34.840 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.840 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:34.840 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:34.840 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:34.840 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:34.840 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.840 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 nvme0n1 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:35.101 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.102 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.360 nvme0n1 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.360 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.619 nvme0n1 00:22:35.619 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.619 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.619 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.619 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.619 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.619 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.619 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.619 10:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.619 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.619 10:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.619 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.879 nvme0n1 00:22:35.879 
10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.879 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.140 nvme0n1 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
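[annotation] Each nvmet_auth_set_key call above (the echo 'hmac(sha256)', echo ffdhe3072, and the two DHHC-1 echoes) reprograms what the kernel target will demand in the next handshake. The redirection targets are again not visible in xtrace; assuming the standard per-host nvmet configfs attributes, the target-side half of this iteration (sha256 / ffdhe3072 / keyid 2) is approximately:

  # sketch of nvmet_auth_set_key sha256 ffdhe3072 2 (configfs attribute names assumed)
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"        # digest the target will negotiate
  echo ffdhe3072      > "$host/dhchap_dhgroup"     # DH group for the exchange
  echo 'DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu:' > "$host/dhchap_key"       # host key (key2)
  echo 'DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR:' > "$host/dhchap_ctrl_key"  # controller key (ckey2)

The host-side attach that follows must present the matching key2/ckey2 pair, otherwise the controller never shows up in bdev_nvme_get_controllers.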
00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.140 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.400 nvme0n1 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.400 
10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.400 10:39:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.400 nvme0n1 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.400 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.660 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.660 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.660 10:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.660 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.660 10:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:22:36.660 10:39:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.660 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.920 nvme0n1 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:36.920 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:36.921 10:39:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.921 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.181 nvme0n1 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.181 10:39:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.181 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.439 nvme0n1 00:22:37.439 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.439 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.439 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.439 10:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.439 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.439 10:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.698 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.958 nvme0n1 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.958 10:39:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.958 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.219 nvme0n1 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:38.219 10:39:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.219 10:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.788 nvme0n1 00:22:38.788 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.788 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.788 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.788 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.788 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.788 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.788 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.788 
10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.788 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.788 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.788 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.788 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.788 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.789 10:39:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.789 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.357 nvme0n1 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:39.357 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.358 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:39.358 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:39.358 10:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:39.358 10:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.358 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.358 10:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.927 nvme0n1 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.927 
10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:22:39.927 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.928 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.496 nvme0n1 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.496 10:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.063 nvme0n1 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.063 10:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.998 nvme0n1 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.998 10:39:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.998 10:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.933 nvme0n1 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.934 10:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.500 nvme0n1 00:22:43.500 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.500 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.500 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.500 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.500 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.500 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.759 
10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
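The trace repeats one pattern per key id: restrict the host to a single DH-HMAC-CHAP digest and DH group, attach the controller with the key pair under test, confirm the controller actually came up, then detach before the next combination. A minimal host-side sketch of that cycle follows; it drives the same RPCs the trace issues through the suite's rpc_cmd helper, so calling scripts/rpc.py directly, and the key names key1/ckey1 registered earlier in the test, are assumptions rather than part of this log.

#!/usr/bin/env bash
# Minimal sketch of one authenticated attach/verify/detach cycle against a
# running SPDK target. The trace performs the same RPCs via its rpc_cmd wrapper.
set -euo pipefail
rpc=scripts/rpc.py   # assumption: default RPC socket

# Allow exactly one digest/DH-group combination, as the trace does per iteration.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Attach with the key pair under test; key1/ckey1 name keys set up earlier in
# the suite (their registration is not shown in this part of the trace).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The attach only succeeds if DH-HMAC-CHAP completed; verify, then clean up.
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0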
00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.759 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.696 nvme0n1 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:44.696 
10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:44.696 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.697 10:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.261 nvme0n1 00:22:45.261 10:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.261 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.261 10:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.261 10:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.261 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.261 10:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.520 10:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.520 nvme0n1 00:22:45.520 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.520 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.520 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.520 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.520 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.520 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.520 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.520 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.520 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.520 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
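On the target side, each nvmet_auth_set_key call in the trace amounts to echoing the chosen hash, DH group, and DHHC-1 secrets into the kernel nvmet target's per-host attributes; the echo 'hmac(sha384)', echo ffdhe2048, and echo DHHC-1:... lines above are those values being written. A rough equivalent for the sha384/ffdhe2048 case is sketched below; the configfs paths are an assumption, since the trace prints only the values and not the files they are redirected into, and the secrets are placeholders.

# Rough target-side equivalent of one nvmet_auth_set_key call (sha384/ffdhe2048).
# Configfs paths are assumed, secrets are placeholders; run as root on the target.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # digest used for DH-HMAC-CHAP
echo 'ffdhe2048'    > "$host_dir/dhchap_dhgroup"   # FFDHE group
echo 'DHHC-1:00:<host secret>:'       > "$host_dir/dhchap_key"        # placeholder
echo 'DHHC-1:02:<controller secret>:' > "$host_dir/dhchap_ctrlr_key"  # placeholder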
00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.778 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.778 nvme0n1 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.779 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.038 nvme0n1 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.038 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.297 nvme0n1 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.297 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.556 nvme0n1 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:46.556 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.557 10:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.815 nvme0n1 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:46.815 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
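Stepping back, this whole section is a nested sweep: for every digest, every DH group, and every key id, the suite programs the target key and then runs the attach cycle, which is why the same RPC sequence keeps recurring with only the digest, dhgroup, and keyid values changing. The skeleton below reproduces that control flow with the two helpers stubbed out; the exact array contents are an assumption, as this part of the trace covers sha256 and sha384 with the ffdhe2048 through ffdhe8192 groups and key ids 0 through 4.

#!/usr/bin/env bash
# Skeleton of the sweep driven by host/auth.sh in the trace. The real
# nvmet_auth_set_key and connect_authenticate helpers are stubbed so the
# control flow stands on its own.
nvmet_auth_set_key()   { echo "target: $1/$2 keyid=$3"; }   # stub
connect_authenticate() { echo "host:   $1/$2 keyid=$3"; }   # stub

digests=(sha256 sha384)                                 # at least these appear in the trace
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)      # groups visible in this section
keys=(0 1 2 3 4)                                        # stands in for the DHHC-1 secrets

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done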
00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.816 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.130 nvme0n1 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.130 nvme0n1 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.130 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:47.388 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.389 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.389 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:47.389 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.389 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:47.389 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:47.389 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:47.389 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:47.389 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.389 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.389 nvme0n1 00:22:47.389 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.389 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.389 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.389 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.389 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:47.646 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.647 10:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.647 nvme0n1 00:22:47.647 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.647 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.647 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.647 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.647 10:39:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.647 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.917 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.176 nvme0n1 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.176 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.434 nvme0n1 00:22:48.434 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.434 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.434 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:48.434 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.434 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.434 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.435 10:39:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.435 10:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.692 nvme0n1 00:22:48.692 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.692 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.692 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.692 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:48.692 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.692 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:48.951 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:22:48.952 10:39:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.952 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.211 nvme0n1 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:49.211 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:49.212 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:49.212 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.470 nvme0n1 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.470 10:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.037 nvme0n1 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:22:50.037 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.038 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.604 nvme0n1 00:22:50.604 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.604 10:39:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.604 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.604 10:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:50.604 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.604 10:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.604 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.175 nvme0n1 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.175 10:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.744 nvme0n1 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.744 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.745 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.745 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:51.745 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:51.745 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:51.745 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.745 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.745 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:51.745 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.745 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
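The xtrace output above and below comes from the sha384 key-matrix loop in host/auth.sh: for every DH group in the test's dhgroups list (ffdhe3072, ffdhe4096, ffdhe6144 and ffdhe8192 are the ones visible in this part of the trace) and for every key index, the script first programs the key into the kernel nvmet target (nvmet_auth_set_key, host/auth.sh@103) and then performs an authenticated connect from the SPDK host side (connect_authenticate, host/auth.sh@104). A minimal sketch of that driving loop, inferred only from the sourced line numbers visible in the trace (the dhgroups, keys and ckeys arrays are defined elsewhere in the test script, and the real host/auth.sh may differ in detail):

    # sketch of the loop driving the trace above (assumed shape;
    # "dhgroups", "keys" and "ckeys" come from the test script itself)
    for dhgroup in "${dhgroups[@]}"; do                     # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                      # host/auth.sh@102
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"   # @103: target side
            connect_authenticate sha384 "$dhgroup" "$keyid" # @104: host side
        done
    done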
00:22:51.745 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:51.745 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:51.745 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:51.745 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.745 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.311 nvme0n1 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
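Each connect_authenticate pass seen in the trace follows the same host-side sequence: restrict the allowed digest and DH group with bdev_nvme_set_options, resolve the initiator IP (10.0.0.1 here), attach a controller with the matching --dhchap-key and, when a controller key exists, --dhchap-ctrlr-key, verify that bdev_nvme_get_controllers reports nvme0, and detach again. Reconstructed as a sketch from the traced commands only (rpc_cmd and get_main_ns_ip are helpers from the test harness; the xtrace toggling via autotest_common.sh and any error handling in the real function are omitted):

    # sketch of one connect_authenticate iteration, as seen in the trace
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey               # @55, @57
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # @58

        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"  # @60
        local ip
        ip=$(get_main_ns_ip)                                         # @61 -> 10.0.0.1
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$ip" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"                  # @61
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # @64
        rpc_cmd bdev_nvme_detach_controller nvme0                    # @65
    }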
00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.311 10:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.315 nvme0n1 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.315 10:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.884 nvme0n1 00:22:53.884 10:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.884 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.884 10:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.884 10:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.884 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.884 10:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.884 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.884 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.884 10:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.884 10:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.142 10:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.081 nvme0n1 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.081 10:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.018 nvme0n1 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:56.018 10:39:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.018 10:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.586 nvme0n1 00:22:56.586 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.586 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:56.586 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.586 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.586 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.586 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.845 nvme0n1 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.845 10:39:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.845 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.105 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.105 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.105 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:57.105 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.105 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:57.105 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:57.105 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:57.105 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:57.105 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.106 nvme0n1 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.106 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.365 nvme0n1 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.365 10:39:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.365 10:39:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.365 10:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.623 nvme0n1 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.623 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.881 nvme0n1 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.881 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.139 nvme0n1 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.139 
10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.139 10:39:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.139 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.398 nvme0n1 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
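(For orientation at this point in the trace: every passing iteration above runs the same initiator-side sequence. The sketch below reconstructs that sequence from the host/auth.sh commands echoed by xtrace; rpc_cmd, the host/subsystem NQNs, the 10.0.0.1:4420 listener and the ckeys array all come from the SPDK test harness as shown in the trace, while the function body itself is a simplified assumption with error handling omitted.)

connect_authenticate() {                          # called as: connect_authenticate <digest> <dhgroup> <keyid>
    local digest=$1 dhgroup=$2 keyid=$3
    # Optional controller (bidirectional) key, built the same way as at host/auth.sh@58 in the trace.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # Restrict the host to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach to the target at 10.0.0.1:4420 with the selected DH-HMAC-CHAP key.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # Verify the controller came up, then tear it down before the next combination.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

As the trace also shows, iterations with keyid 4 have an empty ckey and attach with --dhchap-key only, so those passes exercise authentication without a controller key.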
00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.398 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.657 nvme0n1 00:22:58.657 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.657 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.657 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.657 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.657 10:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.657 10:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.657 10:39:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.657 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.916 nvme0n1 00:22:58.916 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.916 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:58.917 
10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.917 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.177 nvme0n1 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.177 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.437 nvme0n1 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.437 10:39:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.437 10:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.696 nvme0n1 00:22:59.696 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.696 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.696 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.696 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.696 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.696 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.696 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.696 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.696 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.696 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.696 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.696 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.696 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:59.696 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
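
Every per-key cycle in this log runs the same initiator-side sequence: restrict the allowed digests and DH groups, attach the controller with the matching keyring entries, confirm the controller came up, then detach before the next combination. A sketch of one such cycle written as plain scripts/rpc.py invocations (rpc_cmd in the trace is assumed to be a thin wrapper around rpc.py; key2/ckey2 are key names registered earlier in the test, not shown in this excerpt):

    # Hedged sketch of one connect/verify/disconnect cycle from the trace above.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2      # ctrlr key present only when ckey2 is set
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
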
00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.954 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.214 nvme0n1 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.214 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.473 nvme0n1 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.473 10:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.731 nvme0n1 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.731 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.732 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.732 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:00.732 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.732 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.732 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.732 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.732 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.732 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.732 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.298 nvme0n1 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
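
The get_main_ns_ip block that repeats before every attach (nvmf/common.sh@741-755) simply resolves which environment variable carries the address to dial for the current transport, then prints its value (10.0.0.1 here). Roughly reconstructed from the trace -- the early-return guards are inferred, everything else mirrors the logged checks:

    # Hedged reconstruction of the helper traced above; not verbatim nvmf/common.sh.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                   # guard inferred from [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                   # e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                            # its value, e.g. 10.0.0.1
        echo "${!ip}"
    }
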
00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.298 10:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.863 nvme0n1 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.863 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.429 nvme0n1 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:02.429 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.430 10:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.995 nvme0n1 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.995 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.563 nvme0n1 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.563 10:39:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNlYjE3MmFmNGNjOGRlYTBlZTM5MWY2OTljYzczMWaYI60i: 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: ]] 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY2ZTZjOGRhNmE0MTZjZWY4MzUwNzQ5MmQwNDdjNTUyMzY5MTMxZGI0OThhY2RiMDQ3MTkzZWYyNzFjZjgwZkIbOrI=: 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.563 10:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.501 nvme0n1 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.501 10:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.451 nvme0n1 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.451 10:39:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzFmZGFmZDUyNTk2OTI1MGM2OWVkM2EyZjIxZWU4OGPMYtYu: 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: ]] 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjczNzQyMDMyZWRiZTNjODZiZDA2NWRkMDc1ZmY1M2SAS0HR: 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.451 10:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.452 10:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.019 nvme0n1 00:23:06.019 10:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.019 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.019 10:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.019 10:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.019 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM2NGNhMGE0NmJiZjAzMTg2ZTEyODc2OWI2NjU1NzE1ZDU1YzRmMWRiZjFhOTkyf3/+Kw==: 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: ]] 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRjZmM2Mzg2M2U2ZGYzNWQ3ZDM1ZjczNGYxMjBiYmMCTvUw: 00:23:06.278 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:06.278 10:39:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.279 10:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.213 nvme0n1 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.213 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE3NTcxZGE5ZDA0YzQyNTVhYjdiMTY3NWI3MjU1ZTBkOTBjNzk4M2Q0ZTZhYmZlYmRhNTY2YjEyM2UwOGM1MahYYrY=: 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:07.214 10:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.151 nvme0n1 00:23:08.151 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEzZGMwNDg3YmZlZmM5ZjkxZjRmN2U0MWYzZWUzYjM1MTJhMzk0OTQ2NWQ1NDJlbrk7aw==: 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFkODJjZTE4NzYwZjk3NGQ0NmZjM2Y2ZjU5ZTc5NGVjOTk2ZTY5NGQ2MzEyMWM0znSnQg==: 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.152 
10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.152 request: 00:23:08.152 { 00:23:08.152 "name": "nvme0", 00:23:08.152 "trtype": "tcp", 00:23:08.152 "traddr": "10.0.0.1", 00:23:08.152 "adrfam": "ipv4", 00:23:08.152 "trsvcid": "4420", 00:23:08.152 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:08.152 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:08.152 "prchk_reftag": false, 00:23:08.152 "prchk_guard": false, 00:23:08.152 "hdgst": false, 00:23:08.152 "ddgst": false, 00:23:08.152 "method": "bdev_nvme_attach_controller", 00:23:08.152 "req_id": 1 00:23:08.152 } 00:23:08.152 Got JSON-RPC error response 00:23:08.152 response: 00:23:08.152 { 00:23:08.152 "code": -5, 00:23:08.152 "message": "Input/output error" 00:23:08.152 } 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.152 request: 00:23:08.152 { 00:23:08.152 "name": "nvme0", 00:23:08.152 "trtype": "tcp", 00:23:08.152 "traddr": "10.0.0.1", 00:23:08.152 "adrfam": "ipv4", 00:23:08.152 "trsvcid": "4420", 00:23:08.152 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:08.152 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:08.152 "prchk_reftag": false, 00:23:08.152 "prchk_guard": false, 00:23:08.152 "hdgst": false, 00:23:08.152 "ddgst": false, 00:23:08.152 "dhchap_key": "key2", 00:23:08.152 "method": "bdev_nvme_attach_controller", 00:23:08.152 "req_id": 1 00:23:08.152 } 00:23:08.152 Got JSON-RPC error response 00:23:08.152 response: 00:23:08.152 { 00:23:08.152 "code": -5, 00:23:08.152 "message": "Input/output error" 00:23:08.152 } 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:08.152 10:39:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:08.152 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:08.153 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:08.153 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.153 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:08.153 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.153 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:08.153 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.153 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.413 request: 00:23:08.413 { 00:23:08.413 "name": "nvme0", 00:23:08.413 "trtype": "tcp", 00:23:08.413 "traddr": "10.0.0.1", 00:23:08.413 "adrfam": "ipv4", 
00:23:08.413 "trsvcid": "4420", 00:23:08.413 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:08.413 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:08.413 "prchk_reftag": false, 00:23:08.413 "prchk_guard": false, 00:23:08.413 "hdgst": false, 00:23:08.413 "ddgst": false, 00:23:08.413 "dhchap_key": "key1", 00:23:08.413 "dhchap_ctrlr_key": "ckey2", 00:23:08.413 "method": "bdev_nvme_attach_controller", 00:23:08.413 "req_id": 1 00:23:08.413 } 00:23:08.413 Got JSON-RPC error response 00:23:08.413 response: 00:23:08.413 { 00:23:08.413 "code": -5, 00:23:08.413 "message": "Input/output error" 00:23:08.413 } 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:08.413 rmmod nvme_tcp 00:23:08.413 rmmod nvme_fabrics 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1285478 ']' 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1285478 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1285478 ']' 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1285478 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1285478 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1285478' 00:23:08.413 killing process with pid 1285478 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1285478 00:23:08.413 10:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1285478 00:23:08.674 10:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:23:08.674 10:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:08.674 10:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:08.674 10:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.674 10:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:08.674 10:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.674 10:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.674 10:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.574 10:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:10.832 10:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:10.832 10:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:10.832 10:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:10.832 10:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:10.832 10:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:10.832 10:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:10.832 10:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:10.832 10:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:10.832 10:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:10.832 10:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:10.832 10:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:10.832 10:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:12.207 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:12.207 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:12.207 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:12.207 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:12.207 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:12.207 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:12.207 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:12.207 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:12.207 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:12.207 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:12.207 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:12.207 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:12.207 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:12.207 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:12.207 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:12.207 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:13.139 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:23:13.396 10:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.pn8 /tmp/spdk.key-null.oU1 /tmp/spdk.key-sha256.Dg9 /tmp/spdk.key-sha384.Y4P /tmp/spdk.key-sha512.t72 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:13.396 10:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:14.379 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:14.379 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:14.379 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:14.379 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:14.379 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:14.379 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:14.379 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:14.379 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:14.379 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:14.379 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:14.379 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:14.379 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:14.379 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:14.379 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:14.379 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:14.379 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:14.379 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:14.644 00:23:14.644 real 0m46.717s 00:23:14.644 user 0m43.810s 00:23:14.644 sys 0m5.683s 00:23:14.644 10:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:14.644 10:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.644 ************************************ 00:23:14.644 END TEST nvmf_auth_host 00:23:14.644 ************************************ 00:23:14.644 10:40:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:14.644 10:40:03 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:23:14.644 10:40:03 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:14.644 10:40:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:14.644 10:40:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:14.644 10:40:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:14.644 ************************************ 00:23:14.644 START TEST nvmf_digest 00:23:14.644 ************************************ 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:14.644 * Looking for test storage... 
00:23:14.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:14.644 10:40:03 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:14.644 10:40:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:17.176 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:17.176 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.176 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:17.177 Found net devices under 0000:09:00.0: cvl_0_0 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:17.177 Found net devices under 0000:09:00.1: cvl_0_1 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:23:17.177 00:23:17.177 --- 10.0.0.2 ping statistics --- 00:23:17.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.177 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:17.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:23:17.177 00:23:17.177 --- 10.0.0.1 ping statistics --- 00:23:17.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.177 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:17.177 ************************************ 00:23:17.177 START TEST nvmf_digest_clean 00:23:17.177 ************************************ 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1294536 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1294536 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1294536 ']' 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.177 
10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:17.177 [2024-07-15 10:40:05.358562] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:17.177 [2024-07-15 10:40:05.358637] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.177 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.177 [2024-07-15 10:40:05.423494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.177 [2024-07-15 10:40:05.529924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.177 [2024-07-15 10:40:05.529975] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.177 [2024-07-15 10:40:05.529989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.177 [2024-07-15 10:40:05.530006] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.177 [2024-07-15 10:40:05.530017] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:17.177 [2024-07-15 10:40:05.530042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:17.177 null0 00:23:17.177 [2024-07-15 10:40:05.688548] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.177 [2024-07-15 10:40:05.712750] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:17.177 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:17.178 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:17.178 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:17.178 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1294560 00:23:17.178 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1294560 /var/tmp/bperf.sock 00:23:17.178 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:17.178 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1294560 ']' 00:23:17.178 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:17.178 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.178 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:23:17.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:17.178 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.178 10:40:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:17.435 [2024-07-15 10:40:05.756798] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:17.435 [2024-07-15 10:40:05.756887] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294560 ] 00:23:17.435 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.435 [2024-07-15 10:40:05.814398] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.435 [2024-07-15 10:40:05.920989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.692 10:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.692 10:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:17.692 10:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:17.692 10:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:17.692 10:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:17.950 10:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:17.950 10:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:18.207 nvme0n1 00:23:18.207 10:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:18.207 10:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:18.464 Running I/O for 2 seconds... 
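For reference, the sequence that drives each of these timed runs can be reproduced by hand roughly as follows (a minimal sketch based on the commands traced above; paths are relative to the SPDK checkout, and the bperf.sock path, listener address 10.0.0.2:4420 and subsystem NQN are the values shown in this log):

# start bdevperf idle (-z) and paused (--wait-for-rpc) on its own RPC socket
./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# (the harness waits for /var/tmp/bperf.sock to come up before issuing RPCs)

# finish framework init, then attach the target with TCP data digest enabled (--ddgst)
./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# run the timed workload against the attached nvme0n1 bdev
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests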
00:23:20.360 00:23:20.360 Latency(us) 00:23:20.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.360 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:20.360 nvme0n1 : 2.00 19852.96 77.55 0.00 0.00 6439.13 3131.16 18058.81 00:23:20.360 =================================================================================================================== 00:23:20.360 Total : 19852.96 77.55 0.00 0.00 6439.13 3131.16 18058.81 00:23:20.360 0 00:23:20.360 10:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:20.360 10:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:20.360 10:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:20.360 10:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:20.360 | select(.opcode=="crc32c") 00:23:20.360 | "\(.module_name) \(.executed)"' 00:23:20.360 10:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1294560 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1294560 ']' 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1294560 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1294560 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1294560' 00:23:20.618 killing process with pid 1294560 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1294560 00:23:20.618 Received shutdown signal, test time was about 2.000000 seconds 00:23:20.618 00:23:20.618 Latency(us) 00:23:20.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.618 =================================================================================================================== 00:23:20.618 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:20.618 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1294560 00:23:20.875 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:20.875 10:40:09 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:20.875 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:20.875 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:20.875 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:20.875 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:20.875 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:20.875 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1294970 00:23:20.875 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:20.875 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1294970 /var/tmp/bperf.sock 00:23:21.133 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1294970 ']' 00:23:21.133 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:21.133 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.133 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:21.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:21.133 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.133 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:21.133 [2024-07-15 10:40:09.470576] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:21.133 [2024-07-15 10:40:09.470663] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294970 ] 00:23:21.133 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:21.133 Zero copy mechanism will not be used. 
00:23:21.133 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.133 [2024-07-15 10:40:09.534322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.133 [2024-07-15 10:40:09.641731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.133 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.133 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:21.133 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:21.133 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:21.133 10:40:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:21.699 10:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:21.699 10:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:21.955 nvme0n1 00:23:21.955 10:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:21.955 10:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:22.212 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:22.212 Zero copy mechanism will not be used. 00:23:22.212 Running I/O for 2 seconds... 
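Between runs the harness also verifies that the CRC-32C digests were really computed by the accel framework; the check logged after the first run above boils down to roughly this (a sketch; the jq filter and the expected 'software' module name come from the trace, the variable plumbing is simplified):

# pull accel stats from bdevperf and keep only the crc32c counters
./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
    | { read -r acc_module acc_executed
        # without DSA offload the digests must have executed in the software module
        (( acc_executed > 0 )) && [[ $acc_module == software ]]; }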
00:23:24.122 00:23:24.123 Latency(us) 00:23:24.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.123 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:24.123 nvme0n1 : 2.00 5554.42 694.30 0.00 0.00 2876.48 691.77 5048.70 00:23:24.123 =================================================================================================================== 00:23:24.123 Total : 5554.42 694.30 0.00 0.00 2876.48 691.77 5048.70 00:23:24.123 0 00:23:24.123 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:24.123 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:24.123 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:24.123 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:24.123 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:24.123 | select(.opcode=="crc32c") 00:23:24.123 | "\(.module_name) \(.executed)"' 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1294970 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1294970 ']' 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1294970 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1294970 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1294970' 00:23:24.380 killing process with pid 1294970 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1294970 00:23:24.380 Received shutdown signal, test time was about 2.000000 seconds 00:23:24.380 00:23:24.380 Latency(us) 00:23:24.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.380 =================================================================================================================== 00:23:24.380 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:24.380 10:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1294970 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:24.637 10:40:13 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1295497 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1295497 /var/tmp/bperf.sock 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1295497 ']' 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:24.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.637 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:24.637 [2024-07-15 10:40:13.150344] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:24.637 [2024-07-15 10:40:13.150440] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295497 ] 00:23:24.637 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.894 [2024-07-15 10:40:13.208973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.894 [2024-07-15 10:40:13.312217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.894 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.894 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:24.894 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:24.894 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:24.894 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:25.150 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:25.150 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:25.714 nvme0n1 00:23:25.714 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:25.714 10:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:25.714 Running I/O for 2 seconds... 
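When each timed run completes, its bdevperf instance is torn down through the killprocess helper traced above; a rough sketch of what that helper does (the individual commands match the trace, while the pid variable simply holds whatever bperfpid was recorded when bdevperf was launched):

pid=$bperfpid                            # recorded when bdevperf was started
kill -0 "$pid"                           # confirm the process is still alive
comm=$(ps --no-headers -o comm= "$pid")  # resolve its command name (reactor_1 here)
if [[ $comm != sudo ]]; then
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"           # terminate and reap the child
fi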
00:23:27.610 00:23:27.610 Latency(us) 00:23:27.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.610 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:27.610 nvme0n1 : 2.01 21834.83 85.29 0.00 0.00 5849.03 2669.99 9272.13 00:23:27.610 =================================================================================================================== 00:23:27.610 Total : 21834.83 85.29 0.00 0.00 5849.03 2669.99 9272.13 00:23:27.610 0 00:23:27.610 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:27.610 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:27.610 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:27.610 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:27.610 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:27.610 | select(.opcode=="crc32c") 00:23:27.610 | "\(.module_name) \(.executed)"' 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1295497 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1295497 ']' 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1295497 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1295497 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1295497' 00:23:27.868 killing process with pid 1295497 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1295497 00:23:27.868 Received shutdown signal, test time was about 2.000000 seconds 00:23:27.868 00:23:27.868 Latency(us) 00:23:27.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.868 =================================================================================================================== 00:23:27.868 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.868 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1295497 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:28.126 10:40:16 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1295901 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1295901 /var/tmp/bperf.sock 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1295901 ']' 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:28.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:28.126 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:28.384 [2024-07-15 10:40:16.678397] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:28.384 [2024-07-15 10:40:16.678490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295901 ] 00:23:28.384 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:28.384 Zero copy mechanism will not be used. 
00:23:28.384 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.384 [2024-07-15 10:40:16.736734] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.384 [2024-07-15 10:40:16.839833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.384 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.384 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:28.384 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:28.384 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:28.384 10:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:28.950 10:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:28.950 10:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:29.208 nvme0n1 00:23:29.208 10:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:29.208 10:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:29.465 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:29.465 Zero copy mechanism will not be used. 00:23:29.465 Running I/O for 2 seconds... 
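This is the last of the four clean-digest shapes that run_digest walks through; reconstructed from the run_bperf calls logged above, the matrix is (last argument is the scan_dsa flag, false on this rig):

run_bperf randread  4096   128 false   # 4 KiB reads,   queue depth 128
run_bperf randread  131072 16  false   # 128 KiB reads, queue depth 16, above the 64 KiB zero-copy threshold
run_bperf randwrite 4096   128 false   # 4 KiB writes,  queue depth 128
run_bperf randwrite 131072 16  false   # 128 KiB writes, queue depth 16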
00:23:31.363 00:23:31.363 Latency(us) 00:23:31.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.363 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:31.363 nvme0n1 : 2.00 6504.65 813.08 0.00 0.00 2448.78 1735.49 4903.06 00:23:31.363 =================================================================================================================== 00:23:31.363 Total : 6504.65 813.08 0.00 0.00 2448.78 1735.49 4903.06 00:23:31.363 0 00:23:31.363 10:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:31.363 10:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:31.363 10:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:31.363 10:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:31.363 | select(.opcode=="crc32c") 00:23:31.363 | "\(.module_name) \(.executed)"' 00:23:31.363 10:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1295901 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1295901 ']' 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1295901 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1295901 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1295901' 00:23:31.620 killing process with pid 1295901 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1295901 00:23:31.620 Received shutdown signal, test time was about 2.000000 seconds 00:23:31.620 00:23:31.620 Latency(us) 00:23:31.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.620 =================================================================================================================== 00:23:31.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.620 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1295901 00:23:31.876 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1294536 00:23:31.876 10:40:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1294536 ']' 00:23:31.876 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1294536 00:23:31.876 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:31.876 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:31.876 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1294536 00:23:31.876 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:31.876 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:31.876 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1294536' 00:23:31.876 killing process with pid 1294536 00:23:31.876 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1294536 00:23:31.876 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1294536 00:23:32.132 00:23:32.132 real 0m15.351s 00:23:32.133 user 0m29.324s 00:23:32.133 sys 0m4.535s 00:23:32.133 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:32.133 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:32.133 ************************************ 00:23:32.133 END TEST nvmf_digest_clean 00:23:32.133 ************************************ 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:32.389 ************************************ 00:23:32.389 START TEST nvmf_digest_error 00:23:32.389 ************************************ 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1296341 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1296341 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1296341 ']' 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.389 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:32.389 [2024-07-15 10:40:20.770837] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:32.389 [2024-07-15 10:40:20.770938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.389 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.389 [2024-07-15 10:40:20.833367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.389 [2024-07-15 10:40:20.932676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.389 [2024-07-15 10:40:20.932732] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.389 [2024-07-15 10:40:20.932756] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.389 [2024-07-15 10:40:20.932767] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.389 [2024-07-15 10:40:20.932777] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:32.389 [2024-07-15 10:40:20.932821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.646 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:32.646 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:23:32.646 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:32.646 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:32.646 10:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:32.646 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.646 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:32.646 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.646 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:32.646 [2024-07-15 10:40:21.009390] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:32.647 null0 00:23:32.647 [2024-07-15 10:40:21.121351] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.647 [2024-07-15 10:40:21.145554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1296482 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1296482 /var/tmp/bperf.sock 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1296482 ']' 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:32.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.647 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:32.647 [2024-07-15 10:40:21.190225] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:32.647 [2024-07-15 10:40:21.190307] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296482 ] 00:23:32.904 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.904 [2024-07-15 10:40:21.248154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.904 [2024-07-15 10:40:21.357537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.161 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:33.161 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:23:33.161 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:33.161 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:33.417 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:33.417 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.417 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:33.417 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.417 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:33.417 10:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:33.674 nvme0n1 00:23:33.674 10:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:33.674 10:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.674 10:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:33.674 10:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.674 10:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:33.674 10:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:33.674 Running I/O for 2 seconds... 00:23:33.674 [2024-07-15 10:40:22.181148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.674 [2024-07-15 10:40:22.181194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.674 [2024-07-15 10:40:22.181229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.674 [2024-07-15 10:40:22.196241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.674 [2024-07-15 10:40:22.196279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.674 [2024-07-15 10:40:22.196296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.674 [2024-07-15 10:40:22.208059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.674 [2024-07-15 10:40:22.208091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.674 [2024-07-15 10:40:22.208123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.674 [2024-07-15 10:40:22.219850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.674 [2024-07-15 10:40:22.219879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.674 [2024-07-15 10:40:22.219896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.234333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.234366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.234390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.245420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.245464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.245481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.261541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.261569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13505 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.261600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.272218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.272246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.272261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.284962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.284993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.285012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.297760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.297809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.297829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.309901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.309931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.309948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.321889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.321918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.321934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.335177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.335208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.335224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.345581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.345614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 
nsid:1 lba:1223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.345631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.358604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.358631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.358647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.370682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.370713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.370730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.383383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.383414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.383446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.396577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.396607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.396638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.408987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.409016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.409033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.421952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.421982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.421999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.931 [2024-07-15 10:40:22.434628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.931 [2024-07-15 10:40:22.434659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.931 [2024-07-15 10:40:22.434677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.932 [2024-07-15 10:40:22.448144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.932 [2024-07-15 10:40:22.448192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.932 [2024-07-15 10:40:22.448209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.932 [2024-07-15 10:40:22.461498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.932 [2024-07-15 10:40:22.461530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.932 [2024-07-15 10:40:22.461548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.932 [2024-07-15 10:40:22.472023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:33.932 [2024-07-15 10:40:22.472052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.932 [2024-07-15 10:40:22.472068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.188 [2024-07-15 10:40:22.485954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.188 [2024-07-15 10:40:22.485984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.188 [2024-07-15 10:40:22.486002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.188 [2024-07-15 10:40:22.502159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.188 [2024-07-15 10:40:22.502190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.188 [2024-07-15 10:40:22.502207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.188 [2024-07-15 10:40:22.518940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.188 [2024-07-15 10:40:22.518970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.188 [2024-07-15 10:40:22.519003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.188 [2024-07-15 10:40:22.532149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.188 
[2024-07-15 10:40:22.532180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.188 [2024-07-15 10:40:22.532197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.188 [2024-07-15 10:40:22.543758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.188 [2024-07-15 10:40:22.543810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.188 [2024-07-15 10:40:22.543829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.188 [2024-07-15 10:40:22.557735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.188 [2024-07-15 10:40:22.557765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.188 [2024-07-15 10:40:22.557782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.188 [2024-07-15 10:40:22.572475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.189 [2024-07-15 10:40:22.572504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.189 [2024-07-15 10:40:22.572540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.189 [2024-07-15 10:40:22.584444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.189 [2024-07-15 10:40:22.584472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.189 [2024-07-15 10:40:22.584487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.189 [2024-07-15 10:40:22.597546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.189 [2024-07-15 10:40:22.597575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.189 [2024-07-15 10:40:22.597591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.189 [2024-07-15 10:40:22.610625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.189 [2024-07-15 10:40:22.610653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.189 [2024-07-15 10:40:22.610669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.189 [2024-07-15 10:40:22.623073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb74d50) 00:23:34.189 [2024-07-15 10:40:22.623120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.189 [2024-07-15 10:40:22.623137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.189 [2024-07-15 10:40:22.634911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.189 [2024-07-15 10:40:22.634940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.189 [2024-07-15 10:40:22.634956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.189 [2024-07-15 10:40:22.645467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.189 [2024-07-15 10:40:22.645494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.189 [2024-07-15 10:40:22.645509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.189 [2024-07-15 10:40:22.659971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.189 [2024-07-15 10:40:22.660003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.189 [2024-07-15 10:40:22.660021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.189 [2024-07-15 10:40:22.674202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.189 [2024-07-15 10:40:22.674231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.189 [2024-07-15 10:40:22.674247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.189 [2024-07-15 10:40:22.684264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.189 [2024-07-15 10:40:22.684296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.189 [2024-07-15 10:40:22.684312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.189 [2024-07-15 10:40:22.699103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.189 [2024-07-15 10:40:22.699133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.189 [2024-07-15 10:40:22.699150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.189 [2024-07-15 10:40:22.711901] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.189 [2024-07-15 10:40:22.711945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.189 [2024-07-15 10:40:22.711962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.189 [2024-07-15 10:40:22.725065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.189 [2024-07-15 10:40:22.725112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.189 [2024-07-15 10:40:22.725128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.446 [2024-07-15 10:40:22.740668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.446 [2024-07-15 10:40:22.740700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-07-15 10:40:22.740720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.446 [2024-07-15 10:40:22.754478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.446 [2024-07-15 10:40:22.754511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-07-15 10:40:22.754529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.446 [2024-07-15 10:40:22.765662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.446 [2024-07-15 10:40:22.765690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-07-15 10:40:22.765706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.446 [2024-07-15 10:40:22.781298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.446 [2024-07-15 10:40:22.781330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-07-15 10:40:22.781348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.446 [2024-07-15 10:40:22.792152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.446 [2024-07-15 10:40:22.792197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-07-15 10:40:22.792214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
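For reference, the digest-error scenario unfolding above can be replayed outside the harness with the same commands this run already logged. A minimal sketch follows, assuming a built SPDK tree under $SPDK_DIR (a stand-in for the workspace path used above), an nvmf target already listening on 10.0.0.2 port 4420 as configured at the top of this test, and that bare rpc.py calls reach that target's default RPC socket while -s /var/tmp/bperf.sock addresses the bdevperf instance:

  # Target side: route crc32c through the error-injection accel module
  # (in this run it is issued right after target start-up, before the TCP transport is created)
  $SPDK_DIR/scripts/rpc.py accel_assign_opc -o crc32c -m error
  # Host side: bdevperf in wait mode (-z) with its own RPC socket; 4 KiB random reads, QD 128, 2 s
  $SPDK_DIR/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
  # (wait until /var/tmp/bperf.sock accepts RPCs before continuing)
  # Retry failed I/O at the bdev layer without limit and keep per-error statistics
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Injection starts disabled, then the controller is attached with data digest enabled (--ddgst)
  $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Enable crc32c corruption (same -t corrupt -i 256 arguments as issued above) and drive I/O
  $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
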
00:23:34.446 [2024-07-15 10:40:22.805671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.446 [2024-07-15 10:40:22.805721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-07-15 10:40:22.805738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.446 [2024-07-15 10:40:22.820539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.446 [2024-07-15 10:40:22.820570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-07-15 10:40:22.820587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.446 [2024-07-15 10:40:22.832114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.446 [2024-07-15 10:40:22.832143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-07-15 10:40:22.832175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.446 [2024-07-15 10:40:22.847384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.446 [2024-07-15 10:40:22.847414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-07-15 10:40:22.847446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.447 [2024-07-15 10:40:22.861834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.447 [2024-07-15 10:40:22.861863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-07-15 10:40:22.861880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.447 [2024-07-15 10:40:22.873199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.447 [2024-07-15 10:40:22.873227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-07-15 10:40:22.873243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.447 [2024-07-15 10:40:22.885755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.447 [2024-07-15 10:40:22.885783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-07-15 10:40:22.885821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.447 [2024-07-15 10:40:22.898928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.447 [2024-07-15 10:40:22.898959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-07-15 10:40:22.898976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.447 [2024-07-15 10:40:22.910396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.447 [2024-07-15 10:40:22.910440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-07-15 10:40:22.910460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.447 [2024-07-15 10:40:22.924601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.447 [2024-07-15 10:40:22.924633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-07-15 10:40:22.924650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.447 [2024-07-15 10:40:22.936991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.447 [2024-07-15 10:40:22.937023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-07-15 10:40:22.937041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.447 [2024-07-15 10:40:22.949194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.447 [2024-07-15 10:40:22.949224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-07-15 10:40:22.949241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.447 [2024-07-15 10:40:22.961560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.447 [2024-07-15 10:40:22.961590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-07-15 10:40:22.961607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.447 [2024-07-15 10:40:22.973969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.447 [2024-07-15 10:40:22.974016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-07-15 10:40:22.974033] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.447 [2024-07-15 10:40:22.986199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.447 [2024-07-15 10:40:22.986227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-07-15 10:40:22.986242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.705 [2024-07-15 10:40:22.998576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.705 [2024-07-15 10:40:22.998609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.705 [2024-07-15 10:40:22.998627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.705 [2024-07-15 10:40:23.011989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.705 [2024-07-15 10:40:23.012037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.705 [2024-07-15 10:40:23.012055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.705 [2024-07-15 10:40:23.024315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.705 [2024-07-15 10:40:23.024359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.705 [2024-07-15 10:40:23.024375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.705 [2024-07-15 10:40:23.036162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.705 [2024-07-15 10:40:23.036192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.705 [2024-07-15 10:40:23.036224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.705 [2024-07-15 10:40:23.049167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.705 [2024-07-15 10:40:23.049209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.705 [2024-07-15 10:40:23.049226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.705 [2024-07-15 10:40:23.061665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.705 [2024-07-15 10:40:23.061696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.705 [2024-07-15 10:40:23.061728] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.705 [2024-07-15 10:40:23.073984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.705 [2024-07-15 10:40:23.074014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.705 [2024-07-15 10:40:23.074031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.705 [2024-07-15 10:40:23.085605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.705 [2024-07-15 10:40:23.085633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.705 [2024-07-15 10:40:23.085649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.705 [2024-07-15 10:40:23.098719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.705 [2024-07-15 10:40:23.098746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.705 [2024-07-15 10:40:23.098762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.705 [2024-07-15 10:40:23.113010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.705 [2024-07-15 10:40:23.113041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.705 [2024-07-15 10:40:23.113058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.705 [2024-07-15 10:40:23.126339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.706 [2024-07-15 10:40:23.126369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.706 [2024-07-15 10:40:23.126394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.706 [2024-07-15 10:40:23.138550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.706 [2024-07-15 10:40:23.138596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.706 [2024-07-15 10:40:23.138614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.706 [2024-07-15 10:40:23.150307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.706 [2024-07-15 10:40:23.150336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:34.706 [2024-07-15 10:40:23.150351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.706 [2024-07-15 10:40:23.162879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.706 [2024-07-15 10:40:23.162908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.706 [2024-07-15 10:40:23.162924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.706 [2024-07-15 10:40:23.176155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.706 [2024-07-15 10:40:23.176200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.706 [2024-07-15 10:40:23.176216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.706 [2024-07-15 10:40:23.188816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.706 [2024-07-15 10:40:23.188846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.706 [2024-07-15 10:40:23.188864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.706 [2024-07-15 10:40:23.199450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.706 [2024-07-15 10:40:23.199496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.706 [2024-07-15 10:40:23.199513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.706 [2024-07-15 10:40:23.213524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.706 [2024-07-15 10:40:23.213551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.706 [2024-07-15 10:40:23.213567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.706 [2024-07-15 10:40:23.226221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.706 [2024-07-15 10:40:23.226249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.706 [2024-07-15 10:40:23.226264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.706 [2024-07-15 10:40:23.238817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.706 [2024-07-15 10:40:23.238854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:11864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.706 [2024-07-15 10:40:23.238872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.706 [2024-07-15 10:40:23.250033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.706 [2024-07-15 10:40:23.250062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.706 [2024-07-15 10:40:23.250093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.963 [2024-07-15 10:40:23.263184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.963 [2024-07-15 10:40:23.263217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-07-15 10:40:23.263235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.963 [2024-07-15 10:40:23.277475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.963 [2024-07-15 10:40:23.277507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-07-15 10:40:23.277525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.963 [2024-07-15 10:40:23.289890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.963 [2024-07-15 10:40:23.289921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-07-15 10:40:23.289938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.963 [2024-07-15 10:40:23.301265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.963 [2024-07-15 10:40:23.301296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-07-15 10:40:23.301313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.963 [2024-07-15 10:40:23.313487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.963 [2024-07-15 10:40:23.313517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-07-15 10:40:23.313549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.963 [2024-07-15 10:40:23.326939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.963 [2024-07-15 10:40:23.326968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-07-15 10:40:23.326984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.963 [2024-07-15 10:40:23.339539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.963 [2024-07-15 10:40:23.339566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-07-15 10:40:23.339582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.963 [2024-07-15 10:40:23.352636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.963 [2024-07-15 10:40:23.352681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-07-15 10:40:23.352698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.963 [2024-07-15 10:40:23.363831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.963 [2024-07-15 10:40:23.363874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-07-15 10:40:23.363890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.963 [2024-07-15 10:40:23.375964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.963 [2024-07-15 10:40:23.375994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-07-15 10:40:23.376012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.963 [2024-07-15 10:40:23.389487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.963 [2024-07-15 10:40:23.389516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-07-15 10:40:23.389532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.963 [2024-07-15 10:40:23.403694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.963 [2024-07-15 10:40:23.403721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-07-15 10:40:23.403736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.963 [2024-07-15 10:40:23.417548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.963 
[2024-07-15 10:40:23.417577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-07-15 10:40:23.417594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.963 [2024-07-15 10:40:23.429597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.964 [2024-07-15 10:40:23.429625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-07-15 10:40:23.429640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.964 [2024-07-15 10:40:23.444766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.964 [2024-07-15 10:40:23.444798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-07-15 10:40:23.444824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.964 [2024-07-15 10:40:23.455998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.964 [2024-07-15 10:40:23.456028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-07-15 10:40:23.456050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.964 [2024-07-15 10:40:23.470381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.964 [2024-07-15 10:40:23.470410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-07-15 10:40:23.470426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.964 [2024-07-15 10:40:23.481736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.964 [2024-07-15 10:40:23.481763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-07-15 10:40:23.481779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.964 [2024-07-15 10:40:23.496450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:34.964 [2024-07-15 10:40:23.496477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-07-15 10:40:23.496492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.964 [2024-07-15 10:40:23.510976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb74d50) 00:23:34.964 [2024-07-15 10:40:23.511009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-07-15 10:40:23.511028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.525501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.525533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.525566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.536226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.536259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.536276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.549188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.549218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.549234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.562481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.562511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.562527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.575598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.575630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.575647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.590067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.590110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.590128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.606024] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.606056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.606074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.620382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.620412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.620429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.631975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.632006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.632024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.647601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.647631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.647647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.661665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.661710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.661728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.673048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.673076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.673092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.687321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.687366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.687383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
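The repeating pattern above is the expected signature of this test: the initiator's receive path flags a crc32c mismatch on each READ payload ("data digest error on tqpair") and the command completes with status (00/22), which the completion printer spells out as COMMAND TRANSIENT TRANSPORT ERROR; the --bdev-retry-count -1 option set earlier is what lets the run keep retrying these I/Os for the remainder of the 2-second window instead of failing fast. When checking a saved console log like this one, tallying both markers is usually enough to confirm the injection took effect; a rough sketch, with a hypothetical log file name:

  # Count injected digest failures and the resulting transient-transport completions
  grep -c 'data digest error on tqpair' nvmf-tcp-phy-autotest.console.log
  grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' nvmf-tcp-phy-autotest.console.log
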
00:23:35.221 [2024-07-15 10:40:23.702008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.702036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.702066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.714359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.714386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.714402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.726946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.726976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.221 [2024-07-15 10:40:23.726994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.221 [2024-07-15 10:40:23.741044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.221 [2024-07-15 10:40:23.741075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-07-15 10:40:23.741106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.222 [2024-07-15 10:40:23.752251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.222 [2024-07-15 10:40:23.752279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-07-15 10:40:23.752294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.222 [2024-07-15 10:40:23.765251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.222 [2024-07-15 10:40:23.765281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.222 [2024-07-15 10:40:23.765297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.778075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.778106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.778137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.794032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.794061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.794077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.808578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.808610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.808632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.825572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.825599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.825615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.835885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.835915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.835947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.851246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.851275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.851291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.863844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.863872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.863889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.875666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.875711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.875729] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.888201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.888246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.888263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.901119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.901149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.901166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.913779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.913831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.913852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.925070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.925115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.925131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.938616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.938643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.938662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.951756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.951785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.951806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.964606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.964635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.964668] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.977465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.977495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.977514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:23.989816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:23.989847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:23.989864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:24.001945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:24.001976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:24.001994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:24.014486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:24.014531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:24.014549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-07-15 10:40:24.026853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.479 [2024-07-15 10:40:24.026895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-07-15 10:40:24.026941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.736 [2024-07-15 10:40:24.039401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.736 [2024-07-15 10:40:24.039433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.736 [2024-07-15 10:40:24.039450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.736 [2024-07-15 10:40:24.053340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.736 [2024-07-15 10:40:24.053386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:35.736 [2024-07-15 10:40:24.053404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.736 [2024-07-15 10:40:24.066100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.736 [2024-07-15 10:40:24.066131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.736 [2024-07-15 10:40:24.066169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.736 [2024-07-15 10:40:24.078138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.736 [2024-07-15 10:40:24.078169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.736 [2024-07-15 10:40:24.078187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.736 [2024-07-15 10:40:24.090830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.736 [2024-07-15 10:40:24.090881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.736 [2024-07-15 10:40:24.090898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.736 [2024-07-15 10:40:24.104458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.736 [2024-07-15 10:40:24.104489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.736 [2024-07-15 10:40:24.104506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.736 [2024-07-15 10:40:24.115769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.736 [2024-07-15 10:40:24.115819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.736 [2024-07-15 10:40:24.115836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.736 [2024-07-15 10:40:24.128145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.736 [2024-07-15 10:40:24.128173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.736 [2024-07-15 10:40:24.128188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.736 [2024-07-15 10:40:24.140552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.736 [2024-07-15 10:40:24.140584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25211 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.736 [2024-07-15 10:40:24.140601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.736 [2024-07-15 10:40:24.154015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.736 [2024-07-15 10:40:24.154045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.736 [2024-07-15 10:40:24.154063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.736 [2024-07-15 10:40:24.166078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb74d50) 00:23:35.737 [2024-07-15 10:40:24.166125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.737 [2024-07-15 10:40:24.166143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.737 00:23:35.737 Latency(us) 00:23:35.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.737 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:35.737 nvme0n1 : 2.00 19650.66 76.76 0.00 0.00 6505.75 3470.98 21165.70 00:23:35.737 =================================================================================================================== 00:23:35.737 Total : 19650.66 76.76 0.00 0.00 6505.75 3470.98 21165.70 00:23:35.737 0 00:23:35.737 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:35.737 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:35.737 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:35.737 | .driver_specific 00:23:35.737 | .nvme_error 00:23:35.737 | .status_code 00:23:35.737 | .command_transient_transport_error' 00:23:35.737 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:35.994 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 154 > 0 )) 00:23:35.994 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1296482 00:23:35.994 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1296482 ']' 00:23:35.994 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1296482 00:23:35.994 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:23:35.994 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:35.994 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1296482 00:23:35.994 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:35.994 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:35.994 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1296482' 00:23:35.994 killing process with pid 1296482 00:23:35.994 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1296482 00:23:35.994 Received shutdown signal, test time was about 2.000000 seconds 00:23:35.994 00:23:35.994 Latency(us) 00:23:35.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.994 =================================================================================================================== 00:23:35.994 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:35.994 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1296482 00:23:36.251 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:23:36.251 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:36.251 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:36.251 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:23:36.251 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:23:36.251 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1296900 00:23:36.251 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1296900 /var/tmp/bperf.sock 00:23:36.251 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:23:36.251 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1296900 ']' 00:23:36.251 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:36.251 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.251 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:36.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:36.251 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.251 10:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:36.251 [2024-07-15 10:40:24.789421] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:36.251 [2024-07-15 10:40:24.789519] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296900 ] 00:23:36.251 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:36.251 Zero copy mechanism will not be used. 
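The get_transient_errcount step traced shortly before the old bdevperf process was killed reads that error counter over the RPC socket instead of scraping console output. A minimal stand-alone sketch of the same query, reusing the workspace path, socket and jq filter from the trace (all specific to this run, and dependent on --nvme-error-stat having been enabled), would be:

  # Sketch of the transient-error query from the trace above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  errcount=$("$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 )) && echo "digest errors surfaced as $errcount transient transport errors"
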
00:23:36.508 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.508 [2024-07-15 10:40:24.846874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.508 [2024-07-15 10:40:24.949550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.765 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.765 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:23:36.765 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:36.766 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:36.766 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:36.766 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.766 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:36.766 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.766 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:36.766 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:37.341 nvme0n1 00:23:37.341 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:37.341 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.341 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:37.341 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.341 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:37.341 10:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:37.341 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:37.341 Zero copy mechanism will not be used. 00:23:37.341 Running I/O for 2 seconds... 
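With the first bdevperf instance gone, the trace above starts a second pass of the experiment at a 128 KiB I/O size and queue depth 16. Condensed into one place, keeping the socket paths, target address and NQN this run used (note the two accel_error_inject_error calls go through rpc_cmd in the trace, so they are not necessarily aimed at /var/tmp/bperf.sock; rpc.py's default socket is shown here as an assumption), the setup sequence is roughly:

  # Condensed sketch of the setup traced above; all paths/addresses are from this run.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1      # keep per-status error counters; -1 = unlimited retries
  "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable      # start with CRC32C error injection switched off
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                          # --ddgst enables the data digest that gets corrupted
  "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32  # inject corruption into CRC32C operations (-i 32 as traced)
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
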
00:23:37.341 [2024-07-15 10:40:25.869933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.341 [2024-07-15 10:40:25.869995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.341 [2024-07-15 10:40:25.870016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.341 [2024-07-15 10:40:25.876615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.341 [2024-07-15 10:40:25.876662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.341 [2024-07-15 10:40:25.876701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.884468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.884502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.884541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.892588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.892622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.892649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.900518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.900553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.900571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.906745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.906777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.906819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.911508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.911539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.911556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.917040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.917073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.917117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.922242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.922289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.922306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.927172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.927205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.927223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.932508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.932540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.932558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.938277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.938324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.938342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.943832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.943863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.943881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.949141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.949173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.949191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.955247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.955279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.955298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.962068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.962105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.962124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.970594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.970627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.970645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.977927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.977959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.977978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.986015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.986047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.986080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:25.994460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:25.994493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:25.994511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:26.002213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:26.002248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 
[2024-07-15 10:40:26.002268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:26.008637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:26.008670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:26.008688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:26.014577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:26.014610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:26.014629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:26.020485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:26.020517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.650 [2024-07-15 10:40:26.020536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.650 [2024-07-15 10:40:26.026181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.650 [2024-07-15 10:40:26.026214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.026232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.032105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.032144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.032163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.037706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.037738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.037756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.043494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.043527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.043545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.048941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.048973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.048991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.054158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.054189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.054207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.060095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.060127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.060145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.065807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.065839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.065857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.071615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.071648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.071671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.076951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.076983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.077001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.081011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.081043] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.081061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.086022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.086068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.086085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.091931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.091977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.091995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.097950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.097982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.098000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.103608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.103640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.103658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.109303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.109333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.109350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.115257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.115289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.115322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.122084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.122120] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.122138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.127854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.127884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.127901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.133550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.133580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.133597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.139337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.139368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.139386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.145227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.145257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.145290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.150887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.150917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.150934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.156506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.156536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.156553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.162278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.162308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.162326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.168096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.168126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.168143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.173404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.173433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.173450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.179587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.179633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.179652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.185181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.185227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.651 [2024-07-15 10:40:26.185247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.651 [2024-07-15 10:40:26.190698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.651 [2024-07-15 10:40:26.190730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.652 [2024-07-15 10:40:26.190749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.196775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.196854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.196889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.203223] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.203278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.203321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.210164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.210197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.210232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.217718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.217749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.217783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.225399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.225430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.225454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.233032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.233062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.233100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.240655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.240685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.240718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.248365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.248395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.248429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.255962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.255993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.256012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.263515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.263545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.263577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.271121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.271150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.271169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.278756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.278797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.278823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.286233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.286276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.286293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.293827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.293858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.293876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.301446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.301478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.301496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.309089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.309120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.309152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.316783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.316827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.316846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.923 [2024-07-15 10:40:26.324413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.923 [2024-07-15 10:40:26.324457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.923 [2024-07-15 10:40:26.324475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.330519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.330564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.330582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.335938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.335973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.335991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.341994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.342024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.342042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.348190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.348220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.348244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.353239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.353283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.353301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.359398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.359429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.359447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.364765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.364796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.364823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.369454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.369484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.369502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.374279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.374308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.374326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.379074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.379118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.379136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.383967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.383996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:37.924 [2024-07-15 10:40:26.384015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.389067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.389099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.389117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.394438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.394489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.394508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.400455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.400500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.400518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.405903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.405934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.405952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.411139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.411169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.411188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.416267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.416298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.416316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.421871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.421901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.421919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.427809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.427840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.427858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.433781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.433820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.433839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.437417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.437447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.437465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.924 [2024-07-15 10:40:26.444008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.924 [2024-07-15 10:40:26.444036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.924 [2024-07-15 10:40:26.444054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.925 [2024-07-15 10:40:26.450530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.925 [2024-07-15 10:40:26.450560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.925 [2024-07-15 10:40:26.450597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.925 [2024-07-15 10:40:26.456522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.925 [2024-07-15 10:40:26.456552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.925 [2024-07-15 10:40:26.456570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.925 [2024-07-15 10:40:26.462567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.925 [2024-07-15 10:40:26.462612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.925 [2024-07-15 10:40:26.462629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.925 [2024-07-15 10:40:26.468809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:37.925 [2024-07-15 10:40:26.468839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.925 [2024-07-15 10:40:26.468871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.475278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.475308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.475341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.481745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.481790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.481819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.487986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.488017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.488035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.491815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.491845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.491869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.497797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.497836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.497855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.503479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 
00:23:38.184 [2024-07-15 10:40:26.503509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.503528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.509092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.509148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.509177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.514820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.514857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.514876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.520387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.520419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.520437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.526229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.526259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.526278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.532116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.532159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.532178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.537789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.537831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.537849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.543340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.543375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.543408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.548997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.549028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.549046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.554552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.554597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.554616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.560178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.560208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.560226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.565151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.565180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.184 [2024-07-15 10:40:26.565198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.184 [2024-07-15 10:40:26.570152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.184 [2024-07-15 10:40:26.570182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.570200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.575692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.575737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.575755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.581659] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.581688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.581705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.587519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.587548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.587566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.592496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.592525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.592542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.597401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.597429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.597447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.602467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.602495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.602512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.606861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.606893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.606911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.611319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.611350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.611366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:38.185 [2024-07-15 10:40:26.615945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.615974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.615992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.620574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.620604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.620621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.625246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.625275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.625293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.630231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.630263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.630286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.635602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.635634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.635652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.641522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.641554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.641574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.647223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.647254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.647272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.652188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.652220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.652238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.657744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.657775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.657793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.662524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.662555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.662572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.668296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.668328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.668346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.673906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.673937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.673955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.680327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.680358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.680376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.687972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.688003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.688021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.693656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.693687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.693704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.700118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.700150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.700168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.706683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.706715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.706733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.712685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.712717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.712735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.718204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.718236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.718253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.723568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.185 [2024-07-15 10:40:26.723599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.185 [2024-07-15 10:40:26.723618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.185 [2024-07-15 10:40:26.729445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.186 [2024-07-15 10:40:26.729478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:38.186 [2024-07-15 10:40:26.729503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.445 [2024-07-15 10:40:26.737074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.445 [2024-07-15 10:40:26.737108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.445 [2024-07-15 10:40:26.737128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.445 [2024-07-15 10:40:26.743902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.445 [2024-07-15 10:40:26.743935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.445 [2024-07-15 10:40:26.743953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.445 [2024-07-15 10:40:26.750901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.445 [2024-07-15 10:40:26.750933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.445 [2024-07-15 10:40:26.750951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.445 [2024-07-15 10:40:26.754999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.445 [2024-07-15 10:40:26.755029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.445 [2024-07-15 10:40:26.755048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.445 [2024-07-15 10:40:26.760956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.445 [2024-07-15 10:40:26.760988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.445 [2024-07-15 10:40:26.761006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.445 [2024-07-15 10:40:26.767276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.767307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.767324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.774937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.774985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.775003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.780670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.780702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.780720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.786503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.786539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.786557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.791679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.791710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.791727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.796365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.796395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.796414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.802014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.802045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.802063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.808447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.808479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.808511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.813620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.813652] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.813670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.819130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.819161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.819179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.824661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.824693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.824711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.830294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.830325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.830344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.836526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.836558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.836576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.841910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.841942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.841960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.847311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.847344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.847363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.852920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.852953] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.852972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.858392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.858423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.858441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.864034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.864067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.864085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.869694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.869725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.869743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.875244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.875276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.875294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.880518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.880550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.880574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.885173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.885204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.885222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.891056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.891087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.891105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.896123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.896154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.896172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.901537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.901569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.901587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.907739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.907771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.907792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.915159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.915191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.915212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.922866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.922900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.922919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.446 [2024-07-15 10:40:26.929619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.446 [2024-07-15 10:40:26.929651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.446 [2024-07-15 10:40:26.929670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.447 [2024-07-15 10:40:26.935831] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.447 [2024-07-15 10:40:26.935877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.447 [2024-07-15 10:40:26.935896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.447 [2024-07-15 10:40:26.941392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.447 [2024-07-15 10:40:26.941424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.447 [2024-07-15 10:40:26.941442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.447 [2024-07-15 10:40:26.946925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.447 [2024-07-15 10:40:26.946956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.447 [2024-07-15 10:40:26.946975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.447 [2024-07-15 10:40:26.952068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.447 [2024-07-15 10:40:26.952099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.447 [2024-07-15 10:40:26.952127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.447 [2024-07-15 10:40:26.954987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.447 [2024-07-15 10:40:26.955018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.447 [2024-07-15 10:40:26.955036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.447 [2024-07-15 10:40:26.959546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.447 [2024-07-15 10:40:26.959578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.447 [2024-07-15 10:40:26.959596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.447 [2024-07-15 10:40:26.964170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.447 [2024-07-15 10:40:26.964201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.447 [2024-07-15 10:40:26.964218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:38.447 [2024-07-15 10:40:26.967684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.447 [2024-07-15 10:40:26.967714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.447 [2024-07-15 10:40:26.967732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.447 [2024-07-15 10:40:26.971634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.447 [2024-07-15 10:40:26.971664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.447 [2024-07-15 10:40:26.971687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.447 [2024-07-15 10:40:26.976608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.447 [2024-07-15 10:40:26.976639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.447 [2024-07-15 10:40:26.976656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.447 [2024-07-15 10:40:26.981821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.447 [2024-07-15 10:40:26.981850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.447 [2024-07-15 10:40:26.981867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.447 [2024-07-15 10:40:26.987046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.447 [2024-07-15 10:40:26.987076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.447 [2024-07-15 10:40:26.987093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.447 [2024-07-15 10:40:26.992450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.447 [2024-07-15 10:40:26.992482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.447 [2024-07-15 10:40:26.992501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.706 [2024-07-15 10:40:26.997787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.706 [2024-07-15 10:40:26.997827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.706 [2024-07-15 10:40:26.997846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.706 [2024-07-15 10:40:27.003131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.706 [2024-07-15 10:40:27.003164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.706 [2024-07-15 10:40:27.003183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.706 [2024-07-15 10:40:27.008475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.706 [2024-07-15 10:40:27.008509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.706 [2024-07-15 10:40:27.008526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.706 [2024-07-15 10:40:27.013718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.706 [2024-07-15 10:40:27.013749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.706 [2024-07-15 10:40:27.013782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.706 [2024-07-15 10:40:27.019116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.706 [2024-07-15 10:40:27.019179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.706 [2024-07-15 10:40:27.019197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.706 [2024-07-15 10:40:27.024416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.706 [2024-07-15 10:40:27.024463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.706 [2024-07-15 10:40:27.024480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.706 [2024-07-15 10:40:27.029876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.706 [2024-07-15 10:40:27.029906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.706 [2024-07-15 10:40:27.029924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.706 [2024-07-15 10:40:27.035240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.706 [2024-07-15 10:40:27.035271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.706 [2024-07-15 10:40:27.035289] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.706 [2024-07-15 10:40:27.040557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.706 [2024-07-15 10:40:27.040603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.706 [2024-07-15 10:40:27.040620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.706 [2024-07-15 10:40:27.046041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.706 [2024-07-15 10:40:27.046072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.046090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.051292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.051323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.051340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.056355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.056386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.056403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.061600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.061630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.061648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.066889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.066918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.066935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.072075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.072105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 
10:40:27.072123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.077198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.077228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.077246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.082556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.082586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.082603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.088117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.088163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.088180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.093538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.093569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.093586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.098853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.098883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.098901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.104223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.104252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.104269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.109627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.109658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.109681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.113774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.113811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.113830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.116859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.116889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.116906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.121994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.122024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.122041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.127329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.127374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.127391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.132678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.132721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.132738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.138014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.138044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.138061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.143256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.143301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.143319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.149570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.149617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.149634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.156762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.156799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.156829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.163463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.163493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.163511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.169727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.169757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.169774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.176207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.176238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.176256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.182066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.182098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.182115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.187736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.187767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.187784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.194297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.194330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.194348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.201913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.201945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.707 [2024-07-15 10:40:27.201963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.707 [2024-07-15 10:40:27.207503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.707 [2024-07-15 10:40:27.207534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.708 [2024-07-15 10:40:27.207552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.708 [2024-07-15 10:40:27.211383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.708 [2024-07-15 10:40:27.211413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.708 [2024-07-15 10:40:27.211431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.708 [2024-07-15 10:40:27.217330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.708 [2024-07-15 10:40:27.217361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.708 [2024-07-15 10:40:27.217379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.708 [2024-07-15 10:40:27.223519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.708 [2024-07-15 10:40:27.223548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.708 [2024-07-15 10:40:27.223565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.708 [2024-07-15 10:40:27.231193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.708 
[2024-07-15 10:40:27.231243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.708 [2024-07-15 10:40:27.231260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.708 [2024-07-15 10:40:27.237616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.708 [2024-07-15 10:40:27.237663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.708 [2024-07-15 10:40:27.237680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.708 [2024-07-15 10:40:27.243935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.708 [2024-07-15 10:40:27.243967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.708 [2024-07-15 10:40:27.243985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.708 [2024-07-15 10:40:27.250146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.708 [2024-07-15 10:40:27.250178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.708 [2024-07-15 10:40:27.250197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.967 [2024-07-15 10:40:27.256241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.967 [2024-07-15 10:40:27.256277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.967 [2024-07-15 10:40:27.256297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.967 [2024-07-15 10:40:27.261888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.967 [2024-07-15 10:40:27.261923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.967 [2024-07-15 10:40:27.261947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.967 [2024-07-15 10:40:27.267179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.967 [2024-07-15 10:40:27.267211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.267229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.272608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.272640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.272658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.277877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.277909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.277927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.283306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.283337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.283355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.288638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.288669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.288687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.294157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.294189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.294207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.299611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.299643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.299675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.305045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.305077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.305094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.310706] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.310737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.310755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.316107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.316138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.316155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.321464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.321495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.321513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.326593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.326624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.326657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.331729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.331760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.331778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.336916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.336946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.336964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.342080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.342110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.342127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:38.968 [2024-07-15 10:40:27.346703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.346734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.346752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.349741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.349771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.349794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.353727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.353757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.353775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.358791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.358829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.358863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.364129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.364159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.364177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.369357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.369387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.369404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.374579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.374624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.374640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.379891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.379933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.379951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.385043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.385073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.385090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.390154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.390198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.390214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.395304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.395340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.395358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.400449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.400478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.968 [2024-07-15 10:40:27.400495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.968 [2024-07-15 10:40:27.405674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.968 [2024-07-15 10:40:27.405703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.405720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.411006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.411035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.411053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.416200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.416229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.416246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.421408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.421437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.421454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.426604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.426632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.426649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.431942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.431972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.431990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.437369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.437399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.437430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.442578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.442622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.442639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.447731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.447762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:38.969 [2024-07-15 10:40:27.447780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.452891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.452931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.452949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.458127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.458156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.458174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.463384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.463414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.463446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.468593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.468637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.468654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.473922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.473953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.473970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.479048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.479079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.479096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.484259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.484288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.484311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.489687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.489716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.489747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.495160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.495189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.495205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.500327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.500358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.500391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.505437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.505467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.505485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.510663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.510693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.510726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.969 [2024-07-15 10:40:27.515875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:38.969 [2024-07-15 10:40:27.515908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.969 [2024-07-15 10:40:27.515928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.229 [2024-07-15 10:40:27.521082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.229 [2024-07-15 10:40:27.521130] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.229 [2024-07-15 10:40:27.521148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.229 [2024-07-15 10:40:27.526321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.229 [2024-07-15 10:40:27.526367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.229 [2024-07-15 10:40:27.526385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.229 [2024-07-15 10:40:27.531670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.229 [2024-07-15 10:40:27.531705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.229 [2024-07-15 10:40:27.531724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.229 [2024-07-15 10:40:27.537143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.229 [2024-07-15 10:40:27.537173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.229 [2024-07-15 10:40:27.537209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.229 [2024-07-15 10:40:27.542374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.229 [2024-07-15 10:40:27.542404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.229 [2024-07-15 10:40:27.542422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.229 [2024-07-15 10:40:27.547571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.229 [2024-07-15 10:40:27.547601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.229 [2024-07-15 10:40:27.547619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.229 [2024-07-15 10:40:27.552435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.229 [2024-07-15 10:40:27.552465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.229 [2024-07-15 10:40:27.552482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.229 [2024-07-15 10:40:27.557538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.229 [2024-07-15 10:40:27.557568] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.229 [2024-07-15 10:40:27.557602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.229 [2024-07-15 10:40:27.562664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.229 [2024-07-15 10:40:27.562695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.229 [2024-07-15 10:40:27.562713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.229 [2024-07-15 10:40:27.567864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.229 [2024-07-15 10:40:27.567893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.229 [2024-07-15 10:40:27.567912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.229 [2024-07-15 10:40:27.573240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.229 [2024-07-15 10:40:27.573269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.229 [2024-07-15 10:40:27.573286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.229 [2024-07-15 10:40:27.578609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.229 [2024-07-15 10:40:27.578654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.229 [2024-07-15 10:40:27.578671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.229 [2024-07-15 10:40:27.583997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.229 [2024-07-15 10:40:27.584028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.229 [2024-07-15 10:40:27.584045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.229 [2024-07-15 10:40:27.589325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.589355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.589371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.594796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 
00:23:39.230 [2024-07-15 10:40:27.594835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.594853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.600015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.600045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.600063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.605077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.605107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.605124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.610192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.610222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.610240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.616060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.616093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.616112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.621853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.621885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.621908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.627252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.627298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.627315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.632736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.632766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.632798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.638096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.638127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.638158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.643484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.643525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.643543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.648854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.648899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.648917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.654170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.654212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.654230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.659366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.659397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.659415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.664744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.664774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.664791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.670095] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.670125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.670143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.675568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.675598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.675615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.680917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.680947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.680964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.686291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.686320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.686337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.691841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.691887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.691905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.697281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.697310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.697327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.702924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.702971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.702989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:39.230 [2024-07-15 10:40:27.708173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.708203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.708220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.713412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.713442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.713464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.718665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.718694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.718711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.724000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.724031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.724048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.729160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.729204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.729221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.734454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.734484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.734501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.740003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.230 [2024-07-15 10:40:27.740034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.230 [2024-07-15 10:40:27.740051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.230 [2024-07-15 10:40:27.745279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.231 [2024-07-15 10:40:27.745308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.231 [2024-07-15 10:40:27.745325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.231 [2024-07-15 10:40:27.750513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.231 [2024-07-15 10:40:27.750542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.231 [2024-07-15 10:40:27.750573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.231 [2024-07-15 10:40:27.756108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.231 [2024-07-15 10:40:27.756138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.231 [2024-07-15 10:40:27.756155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.231 [2024-07-15 10:40:27.761348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.231 [2024-07-15 10:40:27.761384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.231 [2024-07-15 10:40:27.761402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.231 [2024-07-15 10:40:27.766535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.231 [2024-07-15 10:40:27.766565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.231 [2024-07-15 10:40:27.766582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.231 [2024-07-15 10:40:27.771746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.231 [2024-07-15 10:40:27.771776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.231 [2024-07-15 10:40:27.771795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.231 [2024-07-15 10:40:27.777170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.231 [2024-07-15 10:40:27.777204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.231 [2024-07-15 10:40:27.777223] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.782439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.782473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.782506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.787592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.787624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.787657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.792839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.792884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.792902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.798177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.798207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.798224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.803447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.803477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.803494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.808920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.808951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.808969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.814151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.814181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.814199] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.819330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.819360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.819377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.824448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.824479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.824497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.829878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.829908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.829926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.834768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.834799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.834826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.839944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.839975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.839992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.843543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.843575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.843592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.848060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.848091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:39.490 [2024-07-15 10:40:27.848115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.853634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.853666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.853684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.859152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.859184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.859202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.490 [2024-07-15 10:40:27.864570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae84f0) 00:23:39.490 [2024-07-15 10:40:27.864602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.490 [2024-07-15 10:40:27.864620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.490 00:23:39.490 Latency(us) 00:23:39.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.490 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:39.490 nvme0n1 : 2.00 5498.30 687.29 0.00 0.00 2905.22 570.41 8543.95 00:23:39.490 =================================================================================================================== 00:23:39.490 Total : 5498.30 687.29 0.00 0.00 2905.22 570.41 8543.95 00:23:39.490 0 00:23:39.490 10:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:39.490 10:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:39.490 10:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:39.490 | .driver_specific 00:23:39.490 | .nvme_error 00:23:39.490 | .status_code 00:23:39.490 | .command_transient_transport_error' 00:23:39.490 10:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:39.749 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 355 > 0 )) 00:23:39.749 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1296900 00:23:39.749 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1296900 ']' 00:23:39.749 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1296900 00:23:39.749 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:23:39.749 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 
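For readers following the trace: the randread leg above finishes at roughly 5498 IOPS even though every completion in the flood of messages carries a "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" status, because the injected crc32c corruption makes the data digest check fail and the request is retried. The assertion that follows boils down to one RPC call plus a jq filter over the bdev's NVMe error counters (populated presumably because bdev_nvme_set_options was invoked with --nvme-error-stat when the controller was set up, as it is again for the next leg below). A minimal stand-alone sketch of that check, assuming the same bperf socket and bdev name as this run, with paths shortened relative to the SPDK tree:

    # Read per-bdev NVMe error statistics from the bdevperf app and pull out
    # the number of completions that ended in a transient transport error.
    errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The test only passes if at least one such error was counted (355 in this run).
    (( errcount > 0 ))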
-- # '[' Linux = Linux ']' 00:23:39.749 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1296900 00:23:39.749 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:39.749 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:39.749 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1296900' 00:23:39.749 killing process with pid 1296900 00:23:39.749 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1296900 00:23:39.749 Received shutdown signal, test time was about 2.000000 seconds 00:23:39.749 00:23:39.750 Latency(us) 00:23:39.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.750 =================================================================================================================== 00:23:39.750 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:39.750 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1296900 00:23:40.008 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:23:40.008 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:40.008 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:23:40.008 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:40.008 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:40.008 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1297317 00:23:40.008 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:40.008 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1297317 /var/tmp/bperf.sock 00:23:40.008 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1297317 ']' 00:23:40.008 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:40.008 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.008 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:40.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:40.008 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.008 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:40.008 [2024-07-15 10:40:28.478442] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:40.008 [2024-07-15 10:40:28.478539] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297317 ] 00:23:40.008 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.008 [2024-07-15 10:40:28.537873] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.266 [2024-07-15 10:40:28.643485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.266 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:40.266 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:23:40.266 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:40.266 10:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:40.524 10:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:40.524 10:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.524 10:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:40.524 10:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.524 10:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:40.524 10:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:41.089 nvme0n1 00:23:41.089 10:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:41.089 10:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.089 10:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:41.089 10:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.090 10:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:41.090 10:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:41.090 Running I/O for 2 seconds... 
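For context before the next stretch of digest errors: the randwrite leg just set up above follows the same pattern as the read case. bdevperf is launched in wait mode (-z) with a 4 KiB random-write workload at queue depth 128 on core mask 0x2, error statistics and unlimited bdev retries are enabled, any previous crc32c injection is cleared, the controller is attached with data digest enabled (--ddgst) over TCP, crc32c corruption is injected on an interval of 256 operations, and the two-second run is then kicked off over the bdevperf RPC socket. A condensed sketch of that RPC sequence, assuming the same addresses as this run and with paths shortened (the socket that receives the accel_error_inject_error call is hidden behind rpc_cmd in this excerpt):

    # Configure the NVMe bdev layer in the bdevperf app: count NVMe errors, retry forever.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach the remote subsystem with data digest enabled so every PDU payload is CRC-checked.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt every 256th crc32c operation so a fraction of data digests fail verification.
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # Drive I/O for the configured two seconds.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each failed digest then shows up below as a "Data digest error" from data_crc32_calc_done paired with a transient transport error completion, presumably feeding the same transient-error-count check as the read leg once this run finishes.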
00:23:41.090 [2024-07-15 10:40:29.534209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f6458 00:23:41.090 [2024-07-15 10:40:29.535225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.090 [2024-07-15 10:40:29.535276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:41.090 [2024-07-15 10:40:29.546116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e95a0 00:23:41.090 [2024-07-15 10:40:29.546727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.090 [2024-07-15 10:40:29.546758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:41.090 [2024-07-15 10:40:29.558341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fda78 00:23:41.090 [2024-07-15 10:40:29.559093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.090 [2024-07-15 10:40:29.559138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:41.090 [2024-07-15 10:40:29.570083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190eff18 00:23:41.090 [2024-07-15 10:40:29.571085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.090 [2024-07-15 10:40:29.571114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:41.090 [2024-07-15 10:40:29.582269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e3060 00:23:41.090 [2024-07-15 10:40:29.583558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.090 [2024-07-15 10:40:29.583586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:41.090 [2024-07-15 10:40:29.592948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f57b0 00:23:41.090 [2024-07-15 10:40:29.594133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.090 [2024-07-15 10:40:29.594180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:41.090 [2024-07-15 10:40:29.604635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e5ec8 00:23:41.090 [2024-07-15 10:40:29.605649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.090 [2024-07-15 10:40:29.605703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 
sqhd:0031 p:0 m:0 dnr:0 00:23:41.090 [2024-07-15 10:40:29.616530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f8618 00:23:41.090 [2024-07-15 10:40:29.617606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.090 [2024-07-15 10:40:29.617649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.090 [2024-07-15 10:40:29.627563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e3498 00:23:41.090 [2024-07-15 10:40:29.628520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.090 [2024-07-15 10:40:29.628565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:41.090 [2024-07-15 10:40:29.639112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f6458 00:23:41.348 [2024-07-15 10:40:29.640073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.348 [2024-07-15 10:40:29.640117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:41.348 [2024-07-15 10:40:29.650987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f31b8 00:23:41.348 [2024-07-15 10:40:29.651557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.348 [2024-07-15 10:40:29.651588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:41.348 [2024-07-15 10:40:29.663203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f46d0 00:23:41.348 [2024-07-15 10:40:29.663924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.348 [2024-07-15 10:40:29.663955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:41.348 [2024-07-15 10:40:29.674981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190df988 00:23:41.348 [2024-07-15 10:40:29.675941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.348 [2024-07-15 10:40:29.675984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:41.348 [2024-07-15 10:40:29.685836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f9f68 00:23:41.348 [2024-07-15 10:40:29.686663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.348 [2024-07-15 10:40:29.686707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:41.348 [2024-07-15 10:40:29.697259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fa7d8 00:23:41.348 [2024-07-15 10:40:29.698216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.698258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.711357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e5220 00:23:41.349 [2024-07-15 10:40:29.712924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.712969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.723608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e0a68 00:23:41.349 [2024-07-15 10:40:29.725312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.725362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.731731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f9f68 00:23:41.349 [2024-07-15 10:40:29.732415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.732458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.746281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fd640 00:23:41.349 [2024-07-15 10:40:29.747906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.747953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.758427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fb480 00:23:41.349 [2024-07-15 10:40:29.760220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.760264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.766569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f2d80 00:23:41.349 [2024-07-15 10:40:29.767375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.767407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.779624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190eaab8 00:23:41.349 [2024-07-15 10:40:29.780975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.781019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.791258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f6020 00:23:41.349 [2024-07-15 10:40:29.792457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.792500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.803210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ee190 00:23:41.349 [2024-07-15 10:40:29.804412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.804454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.814735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190edd58 00:23:41.349 [2024-07-15 10:40:29.816067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.816098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.823908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fe720 00:23:41.349 [2024-07-15 10:40:29.824638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.824680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.835748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fbcf0 00:23:41.349 [2024-07-15 10:40:29.836498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.836542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.850021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ef6a8 00:23:41.349 [2024-07-15 10:40:29.851348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.851398] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.859413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e3498 00:23:41.349 [2024-07-15 10:40:29.860221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.860270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.871422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f0bc0 00:23:41.349 [2024-07-15 10:40:29.872517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.872562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.883459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ebfd0 00:23:41.349 [2024-07-15 10:40:29.884537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.884582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.349 [2024-07-15 10:40:29.894590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e9168 00:23:41.349 [2024-07-15 10:40:29.895741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.349 [2024-07-15 10:40:29.895809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:29.907236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e7818 00:23:41.608 [2024-07-15 10:40:29.908571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:29.908608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:29.919664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e0ea0 00:23:41.608 [2024-07-15 10:40:29.921048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:29.921103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:29.930486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190df118 00:23:41.608 [2024-07-15 10:40:29.931726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:29.931757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:29.942048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ed0b0 00:23:41.608 [2024-07-15 10:40:29.943081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:29.943131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:29.953993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e99d8 00:23:41.608 [2024-07-15 10:40:29.954997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:29.955026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:29.965099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fef90 00:23:41.608 [2024-07-15 10:40:29.966768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:29.966825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:29.975039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190dfdc0 00:23:41.608 [2024-07-15 10:40:29.975703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:29.975750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:29.987153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fe2e8 00:23:41.608 [2024-07-15 10:40:29.987974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:29.988018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:30.001394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e0630 00:23:41.608 [2024-07-15 10:40:30.003001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:30.003036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:30.013637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f4f40 00:23:41.608 [2024-07-15 10:40:30.015325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 
10:40:30.015359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:30.024300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e27f0 00:23:41.608 [2024-07-15 10:40:30.026028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:30.026065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:30.034671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f8a50 00:23:41.608 [2024-07-15 10:40:30.035521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:30.035549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:30.047510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f7970 00:23:41.608 [2024-07-15 10:40:30.048384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:30.048436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:30.062012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e5220 00:23:41.608 [2024-07-15 10:40:30.063314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:30.063344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:30.074825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190feb58 00:23:41.608 [2024-07-15 10:40:30.076236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:30.076268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:30.084799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e6fa8 00:23:41.608 [2024-07-15 10:40:30.085494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.608 [2024-07-15 10:40:30.085521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:41.608 [2024-07-15 10:40:30.098932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f6cc8 00:23:41.608 [2024-07-15 10:40:30.100568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:41.609 [2024-07-15 10:40:30.100595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:41.609 [2024-07-15 10:40:30.110729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e0630 00:23:41.609 [2024-07-15 10:40:30.112400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.609 [2024-07-15 10:40:30.112428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:41.609 [2024-07-15 10:40:30.120511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e5220 00:23:41.609 [2024-07-15 10:40:30.121373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.609 [2024-07-15 10:40:30.121401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:41.609 [2024-07-15 10:40:30.133763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e4140 00:23:41.609 [2024-07-15 10:40:30.135312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.609 [2024-07-15 10:40:30.135353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:41.609 [2024-07-15 10:40:30.144955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e4de8 00:23:41.609 [2024-07-15 10:40:30.146425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.609 [2024-07-15 10:40:30.146452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:41.609 [2024-07-15 10:40:30.155817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ee5c8 00:23:41.609 [2024-07-15 10:40:30.156948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.609 [2024-07-15 10:40:30.156979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.167656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ebb98 00:23:41.867 [2024-07-15 10:40:30.168704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.168734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.178539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ef6a8 00:23:41.867 [2024-07-15 10:40:30.180269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10752 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.180302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.188419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e7c50 00:23:41.867 [2024-07-15 10:40:30.189232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.189258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.202416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e0ea0 00:23:41.867 [2024-07-15 10:40:30.203654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.203682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.213367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f0788 00:23:41.867 [2024-07-15 10:40:30.214589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.214620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.225496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190eb328 00:23:41.867 [2024-07-15 10:40:30.226831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.226875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.237257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fc998 00:23:41.867 [2024-07-15 10:40:30.238322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.238350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.248523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fa3a0 00:23:41.867 [2024-07-15 10:40:30.249927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.249958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.260128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e7818 00:23:41.867 [2024-07-15 10:40:30.261206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16256 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.261253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.272039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e0a68 00:23:41.867 [2024-07-15 10:40:30.273266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.273309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.282938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f92c0 00:23:41.867 [2024-07-15 10:40:30.283999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.284050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.294567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f1868 00:23:41.867 [2024-07-15 10:40:30.295457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.295484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.306742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ee190 00:23:41.867 [2024-07-15 10:40:30.307742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.307769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.317954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f2948 00:23:41.867 [2024-07-15 10:40:30.318944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.318993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.329648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f35f0 00:23:41.867 [2024-07-15 10:40:30.330276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.330304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.344179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190edd58 00:23:41.867 [2024-07-15 10:40:30.345893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 
nsid:1 lba:4245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.345922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.352315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e38d0 00:23:41.867 [2024-07-15 10:40:30.353178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.353223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.366529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e5ec8 00:23:41.867 [2024-07-15 10:40:30.367809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.867 [2024-07-15 10:40:30.367852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:41.867 [2024-07-15 10:40:30.376225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e4140 00:23:41.867 [2024-07-15 10:40:30.376834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.868 [2024-07-15 10:40:30.376863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:41.868 [2024-07-15 10:40:30.390595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f7da8 00:23:41.868 [2024-07-15 10:40:30.392340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.868 [2024-07-15 10:40:30.392367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:41.868 [2024-07-15 10:40:30.398823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e6fa8 00:23:41.868 [2024-07-15 10:40:30.399546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.868 [2024-07-15 10:40:30.399572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:41.868 [2024-07-15 10:40:30.409668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f81e0 00:23:41.868 [2024-07-15 10:40:30.410407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.868 [2024-07-15 10:40:30.410434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:42.126 [2024-07-15 10:40:30.422499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e4de8 00:23:42.126 [2024-07-15 10:40:30.423149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:79 nsid:1 lba:1615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.126 [2024-07-15 10:40:30.423193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:42.126 [2024-07-15 10:40:30.434711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e3060 00:23:42.126 [2024-07-15 10:40:30.435495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.126 [2024-07-15 10:40:30.435524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:42.126 [2024-07-15 10:40:30.446760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e3d08 00:23:42.126 [2024-07-15 10:40:30.447753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.447796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.457345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fef90 00:23:42.127 [2024-07-15 10:40:30.458514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.458543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.468874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f1868 00:23:42.127 [2024-07-15 10:40:30.469780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.469831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.481077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f8a50 00:23:42.127 [2024-07-15 10:40:30.482085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.482128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.492048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190eaab8 00:23:42.127 [2024-07-15 10:40:30.492929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.492958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.503471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fc998 00:23:42.127 [2024-07-15 10:40:30.504494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.504536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.515332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e7818 00:23:42.127 [2024-07-15 10:40:30.515983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.516018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.527364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ecc78 00:23:42.127 [2024-07-15 10:40:30.528252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.528285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.539459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fc128 00:23:42.127 [2024-07-15 10:40:30.540519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.540548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.550707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fb8b8 00:23:42.127 [2024-07-15 10:40:30.552500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.552535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.560738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fd208 00:23:42.127 [2024-07-15 10:40:30.561494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.561521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.573180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190de8a8 00:23:42.127 [2024-07-15 10:40:30.574088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.574131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.584918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ecc78 00:23:42.127 [2024-07-15 
10:40:30.585777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.585827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.598645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fb048 00:23:42.127 [2024-07-15 10:40:30.599998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.600030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.608341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190dece0 00:23:42.127 [2024-07-15 10:40:30.608997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.609026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.622419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f7538 00:23:42.127 [2024-07-15 10:40:30.624026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.624054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.634546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f8a50 00:23:42.127 [2024-07-15 10:40:30.636357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.636385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.642838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e73e0 00:23:42.127 [2024-07-15 10:40:30.643579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.643605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.653877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ebfd0 00:23:42.127 [2024-07-15 10:40:30.654631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.654657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:42.127 [2024-07-15 10:40:30.666169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fc998 
00:23:42.127 [2024-07-15 10:40:30.667082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.127 [2024-07-15 10:40:30.667126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.678327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f31b8 00:23:42.386 [2024-07-15 10:40:30.679323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.679361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.690750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ee5c8 00:23:42.386 [2024-07-15 10:40:30.691663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.691707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.702665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e4140 00:23:42.386 [2024-07-15 10:40:30.703522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.703551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.713997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f8618 00:23:42.386 [2024-07-15 10:40:30.715193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.715221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.725443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190eaef0 00:23:42.386 [2024-07-15 10:40:30.726535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.726562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.737749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ed4e8 00:23:42.386 [2024-07-15 10:40:30.738974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.739020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.749569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) 
with pdu=0x2000190e1b48 00:23:42.386 [2024-07-15 10:40:30.750449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.750481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.761692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f1868 00:23:42.386 [2024-07-15 10:40:30.762718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.762747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.772303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e5658 00:23:42.386 [2024-07-15 10:40:30.773503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.773531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.784066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e3d08 00:23:42.386 [2024-07-15 10:40:30.784995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.785042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.796312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190eea00 00:23:42.386 [2024-07-15 10:40:30.797378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.797429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.808794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f1ca0 00:23:42.386 [2024-07-15 10:40:30.810024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.810072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.819555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190eaef0 00:23:42.386 [2024-07-15 10:40:30.821357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.821396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.831574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x11f96b0) with pdu=0x2000190e12d8 00:23:42.386 [2024-07-15 10:40:30.833054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.833084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.843274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ef270 00:23:42.386 [2024-07-15 10:40:30.844333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.844390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.856442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190eb328 00:23:42.386 [2024-07-15 10:40:30.858054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.858100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.868528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ebfd0 00:23:42.386 [2024-07-15 10:40:30.870323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.386 [2024-07-15 10:40:30.870367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:42.386 [2024-07-15 10:40:30.876724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e0ea0 00:23:42.387 [2024-07-15 10:40:30.877680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.387 [2024-07-15 10:40:30.877724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:42.387 [2024-07-15 10:40:30.890967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e2c28 00:23:42.387 [2024-07-15 10:40:30.892352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.387 [2024-07-15 10:40:30.892396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:42.387 [2024-07-15 10:40:30.901972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fef90 00:23:42.387 [2024-07-15 10:40:30.903365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.387 [2024-07-15 10:40:30.903410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:42.387 [2024-07-15 10:40:30.912679] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190de470 00:23:42.387 [2024-07-15 10:40:30.914232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.387 [2024-07-15 10:40:30.914260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:42.387 [2024-07-15 10:40:30.924635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f7da8 00:23:42.387 [2024-07-15 10:40:30.925943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.387 [2024-07-15 10:40:30.925973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:42.646 [2024-07-15 10:40:30.937355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fe2e8 00:23:42.646 [2024-07-15 10:40:30.938752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.646 [2024-07-15 10:40:30.938811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:42.646 [2024-07-15 10:40:30.946644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f4298 00:23:42.646 [2024-07-15 10:40:30.947575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.646 [2024-07-15 10:40:30.947622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:42.646 [2024-07-15 10:40:30.961499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f5be8 00:23:42.646 [2024-07-15 10:40:30.962820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.646 [2024-07-15 10:40:30.962870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:42.646 [2024-07-15 10:40:30.972884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190efae0 00:23:42.646 [2024-07-15 10:40:30.974051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.646 [2024-07-15 10:40:30.974110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:42.646 [2024-07-15 10:40:30.984679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f0bc0 00:23:42.646 [2024-07-15 10:40:30.986000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.646 [2024-07-15 10:40:30.986050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:30.997212] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fcdd0 00:23:42.647 [2024-07-15 10:40:30.998640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:30.998669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.009738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f4298 00:23:42.647 [2024-07-15 10:40:31.011332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.011364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.022229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fdeb0 00:23:42.647 [2024-07-15 10:40:31.023972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.024018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.030604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f46d0 00:23:42.647 [2024-07-15 10:40:31.031314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.031360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.042818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fda78 00:23:42.647 [2024-07-15 10:40:31.043537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.043569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.055373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e3d08 00:23:42.647 [2024-07-15 10:40:31.056257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.056289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.067985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fa3a0 00:23:42.647 [2024-07-15 10:40:31.069010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.069054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:42.647 
[2024-07-15 10:40:31.079951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f0ff8 00:23:42.647 [2024-07-15 10:40:31.081095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.081139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.092445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f7da8 00:23:42.647 [2024-07-15 10:40:31.093704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.093732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.104839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190de470 00:23:42.647 [2024-07-15 10:40:31.106304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.106332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.117175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e2c28 00:23:42.647 [2024-07-15 10:40:31.118777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.118829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.128421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ee190 00:23:42.647 [2024-07-15 10:40:31.129557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.129591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.141819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e8d30 00:23:42.647 [2024-07-15 10:40:31.143558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.143601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.150379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e95a0 00:23:42.647 [2024-07-15 10:40:31.151231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.151262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 
m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.162980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fda78 00:23:42.647 [2024-07-15 10:40:31.164015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.164044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.175162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e4578 00:23:42.647 [2024-07-15 10:40:31.176167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.176214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:42.647 [2024-07-15 10:40:31.189719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f0ff8 00:23:42.647 [2024-07-15 10:40:31.191575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.647 [2024-07-15 10:40:31.191621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.198430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fdeb0 00:23:42.906 [2024-07-15 10:40:31.199422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.199476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.213129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ebfd0 00:23:42.906 [2024-07-15 10:40:31.214675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.214721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.223948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e6300 00:23:42.906 [2024-07-15 10:40:31.225728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.225777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.234296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fb048 00:23:42.906 [2024-07-15 10:40:31.235073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.235129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 
cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.248798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e73e0 00:23:42.906 [2024-07-15 10:40:31.250236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.250283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.261242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f92c0 00:23:42.906 [2024-07-15 10:40:31.262794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.262850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.272495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190df550 00:23:42.906 [2024-07-15 10:40:31.273726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.273756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.286496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e3498 00:23:42.906 [2024-07-15 10:40:31.288361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.288406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.294928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f1ca0 00:23:42.906 [2024-07-15 10:40:31.295910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.295956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.309492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190dece0 00:23:42.906 [2024-07-15 10:40:31.310947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.310982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.321303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f7100 00:23:42.906 [2024-07-15 10:40:31.322822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.322868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.333747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fe2e8 00:23:42.906 [2024-07-15 10:40:31.335488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.335539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.345827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e9168 00:23:42.906 [2024-07-15 10:40:31.347518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.347567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.354460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190eea00 00:23:42.906 [2024-07-15 10:40:31.355463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.355509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.366597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f0350 00:23:42.906 [2024-07-15 10:40:31.367218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.367248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.381680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fc560 00:23:42.906 [2024-07-15 10:40:31.383495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.906 [2024-07-15 10:40:31.383541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:42.906 [2024-07-15 10:40:31.390397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190ea680 00:23:42.907 [2024-07-15 10:40:31.391261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.907 [2024-07-15 10:40:31.391305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:42.907 [2024-07-15 10:40:31.402171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f4b08 00:23:42.907 [2024-07-15 10:40:31.403105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.907 [2024-07-15 10:40:31.403158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:42.907 [2024-07-15 10:40:31.414525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190df988 00:23:42.907 [2024-07-15 10:40:31.415619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.907 [2024-07-15 10:40:31.415664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:42.907 [2024-07-15 10:40:31.426674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f6cc8 00:23:42.907 [2024-07-15 10:40:31.427478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.907 [2024-07-15 10:40:31.427508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:42.907 [2024-07-15 10:40:31.439114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f92c0 00:23:42.907 [2024-07-15 10:40:31.440060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.907 [2024-07-15 10:40:31.440107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:42.907 [2024-07-15 10:40:31.450391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f9b30 00:23:42.907 [2024-07-15 10:40:31.452187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.907 [2024-07-15 10:40:31.452233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:43.164 [2024-07-15 10:40:31.463725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f57b0 00:23:43.165 [2024-07-15 10:40:31.465113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.165 [2024-07-15 10:40:31.465162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:43.165 [2024-07-15 10:40:31.476186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190e38d0 00:23:43.165 [2024-07-15 10:40:31.477776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.165 [2024-07-15 10:40:31.477832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:43.165 [2024-07-15 10:40:31.488587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fcdd0 00:23:43.165 [2024-07-15 10:40:31.490388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.165 [2024-07-15 10:40:31.490435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:23:43.165 [2024-07-15 10:40:31.497049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190fd208
00:23:43.165 [2024-07-15 10:40:31.497781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.165 [2024-07-15 10:40:31.497819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:23:43.165 [2024-07-15 10:40:31.508837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190f7538
00:23:43.165 [2024-07-15 10:40:31.509703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.165 [2024-07-15 10:40:31.509749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:23:43.165 [2024-07-15 10:40:31.522249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11f96b0) with pdu=0x2000190eee38
00:23:43.165 [2024-07-15 10:40:31.523023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.165 [2024-07-15 10:40:31.523054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.165
00:23:43.165 Latency(us)
00:23:43.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:43.165 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:23:43.165 nvme0n1 : 2.01 21701.24 84.77 0.00 0.00 5884.53 2585.03 16019.91
00:23:43.165 ===================================================================================================================
00:23:43.165 Total : 21701.24 84.77 0.00 0.00 5884.53 2585.03 16019.91
00:23:43.165 0
00:23:43.165 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:43.165 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:43.165 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:43.165 | .driver_specific
00:23:43.165 | .nvme_error
00:23:43.165 | .status_code
00:23:43.165 | .command_transient_transport_error'
00:23:43.165 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:43.422 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 ))
00:23:43.422 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1297317
00:23:43.422 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1297317 ']'
00:23:43.422 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1297317
00:23:43.422 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:23:43.422 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:43.422 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1297317
00:23:43.422 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:43.422 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:43.422 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1297317'
00:23:43.422 killing process with pid 1297317
00:23:43.422 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1297317
00:23:43.422 Received shutdown signal, test time was about 2.000000 seconds
00:23:43.422
00:23:43.422 Latency(us)
00:23:43.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:43.422 ===================================================================================================================
00:23:43.422 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:43.422 10:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1297317
00:23:43.680 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:23:43.680 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:43.680 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:23:43.680 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:23:43.680 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:23:43.680 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1297732
00:23:43.680 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:23:43.680 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1297732 /var/tmp/bperf.sock
00:23:43.680 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1297732 ']'
00:23:43.680 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:43.680 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:43.680 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:43.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:43.680 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:43.680 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:43.680 [2024-07-15 10:40:32.133985] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:23:43.680 [2024-07-15 10:40:32.134081] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297732 ]
00:23:43.680 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:43.680 Zero copy mechanism will not be used.
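The pass/fail decision for the randwrite case above is the get_transient_errcount / (( 170 > 0 )) check in host/digest.sh: bdevperf has been recording NVMe error statistics (enabled earlier with bdev_nvme_set_options --nvme-error-stat), and the test requires a non-zero count of COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions on nvme0n1. The following is a minimal sketch of that check, reconstructed only from the xtrace above; the actual helper in host/digest.sh may differ in details, and the rpc.py path and bperf socket are the ones this run used.

# Sketch (from the xtrace above): count transient transport errors seen by the bdev.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

get_transient_errcount() {
	# Read bdevperf's per-bdev NVMe error statistics and extract the number of
	# COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions recorded for the bdev.
	$RPC_PY -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
		| jq -r '.bdevs[0]
			| .driver_specific
			| .nvme_error
			| .status_code
			| .command_transient_transport_error'
}

# The case passes only if at least one injected digest error surfaced as a
# transient transport error; in this run the count was 170.
(( $(get_transient_errcount nvme0n1) > 0 ))

After the check, the first bdevperf instance (pid 1297317) is killed and the next case, run_bperf_err randwrite 131072 16, repeats the same flow with a new bdevperf instance.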
00:23:43.680 EAL: No free 2048 kB hugepages reported on node 1
00:23:43.680 [2024-07-15 10:40:32.195049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:43.937 [2024-07-15 10:40:32.300666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:43.937 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:43.937 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:23:43.937 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:43.937 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:44.194 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:44.194 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:44.194 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:44.194 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:44.194 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:44.194 10:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:44.757 nvme0n1
00:23:44.757 10:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:44.757 10:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:44.757 10:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:44.757 10:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:44.757 10:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:44.757 10:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:45.016 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:45.016 Zero copy mechanism will not be used.
00:23:45.016 Running I/O for 2 seconds...
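Condensed, the setup the xtrace above performs for this 131072-byte, qd=16 randwrite error case looks roughly like the sketch below. It is reconstructed from the trace, not copied from host/digest.sh: bperf_rpc targets the freshly started bdevperf instance on /var/tmp/bperf.sock, while rpc_cmd goes to the NVMe-oF target application over its default RPC socket, assumed here to be /var/tmp/spdk.sock.

RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# bdevperf side: keep NVMe error statistics and retry failed I/O indefinitely,
# so injected digest errors are counted rather than failing the job outright.
$RPC_PY -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target side: clear any crc32c error injection left over from the previous case
# (target socket path assumed; the trace only shows rpc_cmd, not the socket).
$RPC_PY -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable

# bdevperf side: attach the remote namespace with data digest (--ddgst) enabled.
$RPC_PY -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
	-a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side: arm crc32c error injection ("-o crc32c -t corrupt -i 32", copied
# verbatim from the trace), so data-digest checks begin failing on writes.
$RPC_PY -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the queued randwrite workload in bdevperf (2-second run).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
	-s /var/tmp/bperf.sock perform_tests

The digest-error records that follow are the visible result of that injection: each pair is a data_crc32_calc_done digest error on the TCP qpair followed by a WRITE command completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which bdevperf's error statistics accumulate for the same get_transient_errcount check at the end of the run.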
00:23:45.016 [2024-07-15 10:40:33.338106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.338488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.338526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.343536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.343866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.343898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.348653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.348958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.348990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.353764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.354109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.354137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.359234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.359544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.359572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.365086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.365403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.365431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.371118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.371434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.371462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.377046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.377362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.377391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.382091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.382456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.382485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.387358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.387656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.387684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.392464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.392795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.392833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.397598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.397905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.397935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.402620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.402969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.402999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.407827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.408109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.408138] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.413287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.413585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.413614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.419636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.419968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.419998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.424703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.425094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.425134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.430590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.430904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.430934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.437054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.437409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.437437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.442205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.442507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.442540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.447264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.447596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.447624] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.452377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.452648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.452675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.457198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.457505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.457533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.462129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.462433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.462460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.467193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.467501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.467528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.472401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.016 [2024-07-15 10:40:33.472704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.016 [2024-07-15 10:40:33.472731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.016 [2024-07-15 10:40:33.477327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.477655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.017 [2024-07-15 10:40:33.477682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.482384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.482709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:45.017 [2024-07-15 10:40:33.482737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.487536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.487893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.017 [2024-07-15 10:40:33.487922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.492554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.492907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.017 [2024-07-15 10:40:33.492937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.497429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.497773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.017 [2024-07-15 10:40:33.497824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.502442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.502749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.017 [2024-07-15 10:40:33.502776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.507491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.507918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.017 [2024-07-15 10:40:33.507962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.513729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.514035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.017 [2024-07-15 10:40:33.514064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.519861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.520143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.017 [2024-07-15 10:40:33.520172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.526506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.526797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.017 [2024-07-15 10:40:33.526847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.532860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.533239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.017 [2024-07-15 10:40:33.533267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.539112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.539430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.017 [2024-07-15 10:40:33.539473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.545124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.545501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.017 [2024-07-15 10:40:33.545553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.552472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.552768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.017 [2024-07-15 10:40:33.552797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.559404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.017 [2024-07-15 10:40:33.559694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.017 [2024-07-15 10:40:33.559722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.017 [2024-07-15 10:40:33.564547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.564846] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.564898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.569260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.569611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.569644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.574020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.574293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.574327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.578623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.578883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.578913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.584678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.584974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.585009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.590671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.590988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.591018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.596617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.596978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.597008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.602862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.603194] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.603222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.608517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.608773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.608810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.615437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.615724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.615767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.622053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.622356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.622384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.627531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.627830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.627858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.632944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.633225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.633252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.638376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.638642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.638671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.643711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 
00:23:45.276 [2024-07-15 10:40:33.643975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.644004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.648998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.649267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.649294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.654325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.654609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.654635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.659677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.276 [2024-07-15 10:40:33.659940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.276 [2024-07-15 10:40:33.659970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.276 [2024-07-15 10:40:33.664821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.665072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.665101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.670054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.670308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.670351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.675730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.675990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.676019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.680939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.681207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.681240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.686333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.686597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.686639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.691258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.691509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.691551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.695902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.696170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.696198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.700459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.700720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.700763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.705347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.705619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.705647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.710131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.710410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.710437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.714849] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.715161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.715192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.719617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.719917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.719945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.724275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.724537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.724564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.728951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.729242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.729269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.733769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.734050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.734079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.738466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.738742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.738769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.743136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.743443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.743471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:45.277 [2024-07-15 10:40:33.747770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.748041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.748071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.752511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.752826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.752863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.757162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.757450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.757478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.761651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.761914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.761944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.766235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.766498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.766542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.770848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.771117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.771146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.775396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.775652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.775696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.779882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.780145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.780173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.784607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.784962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.784992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.789213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.789503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.789532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.793857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.794109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.277 [2024-07-15 10:40:33.794153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.277 [2024-07-15 10:40:33.798441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.277 [2024-07-15 10:40:33.798708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.278 [2024-07-15 10:40:33.798737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.278 [2024-07-15 10:40:33.803159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.278 [2024-07-15 10:40:33.803434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.278 [2024-07-15 10:40:33.803467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.278 [2024-07-15 10:40:33.807789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.278 [2024-07-15 10:40:33.808065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.278 [2024-07-15 10:40:33.808113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.278 [2024-07-15 10:40:33.812496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.278 [2024-07-15 10:40:33.812773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.278 [2024-07-15 10:40:33.812807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.278 [2024-07-15 10:40:33.817177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.278 [2024-07-15 10:40:33.817453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.278 [2024-07-15 10:40:33.817481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.278 [2024-07-15 10:40:33.821919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.278 [2024-07-15 10:40:33.822198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.278 [2024-07-15 10:40:33.822233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.826921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.827187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.827217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.831717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.832012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.832043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.836689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.836990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.837020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.841536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.841919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.841949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.846253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.846521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.846566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.850821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.851072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.851102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.855349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.855600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.855631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.859851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.860117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.860147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.864388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.864644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.864688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.869127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.869387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.869416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.874016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.874285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 
[2024-07-15 10:40:33.874314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.879212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.879499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.879530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.885320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.885626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.885655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.890596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.890902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.890931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.896602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.896893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.896923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.901987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.902264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.902293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.907348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.907642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.907670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.912724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.912986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.913016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.917340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.917607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.917650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.922605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.922912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.537 [2024-07-15 10:40:33.922941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.537 [2024-07-15 10:40:33.928000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.537 [2024-07-15 10:40:33.928290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.928323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.933389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.933668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.933702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.939116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.939384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.939414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.943827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.944079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.944122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.948451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.948702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.948731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.953256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.953508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.953557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.957702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.957964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.957994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.962255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.962518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.962547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.966855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.967121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.967150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.971522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.971847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.971877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.976134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.976429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.976458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.980727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.980985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.981015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.985371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.985673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.985717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.990025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.990340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.990369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.994695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.994967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.994997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:33.999470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:33.999734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:33.999763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:34.004234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:34.004542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:34.004570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:34.008980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:34.009245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:34.009279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:34.013648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 
[2024-07-15 10:40:34.013909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:34.013944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:34.018408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:34.018670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:34.018703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:34.023046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:34.023307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:34.023349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:34.027836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:34.028101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:34.028145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:34.032435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:34.032698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:34.032727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:34.037218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:34.037513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:34.037542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:34.041928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:34.042213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:34.042241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:34.046533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:34.046865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:34.046895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:34.051169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:34.051426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:34.051455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:34.055842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:34.056113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:34.056142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:34.060475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:34.060727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:34.060771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.538 [2024-07-15 10:40:34.065146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.538 [2024-07-15 10:40:34.065446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.538 [2024-07-15 10:40:34.065475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.539 [2024-07-15 10:40:34.069930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.539 [2024-07-15 10:40:34.070209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.539 [2024-07-15 10:40:34.070236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.539 [2024-07-15 10:40:34.074774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.539 [2024-07-15 10:40:34.075034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.539 [2024-07-15 10:40:34.075064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.539 [2024-07-15 10:40:34.079779] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.539 [2024-07-15 10:40:34.080048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.539 [2024-07-15 10:40:34.080092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.539 [2024-07-15 10:40:34.085701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.798 [2024-07-15 10:40:34.086021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.798 [2024-07-15 10:40:34.086054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.798 [2024-07-15 10:40:34.091779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.798 [2024-07-15 10:40:34.092052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.798 [2024-07-15 10:40:34.092099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.798 [2024-07-15 10:40:34.098675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.798 [2024-07-15 10:40:34.098949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.798 [2024-07-15 10:40:34.098980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.798 [2024-07-15 10:40:34.105308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.798 [2024-07-15 10:40:34.105562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.798 [2024-07-15 10:40:34.105608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.798 [2024-07-15 10:40:34.112162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.798 [2024-07-15 10:40:34.112471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.798 [2024-07-15 10:40:34.112502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.798 [2024-07-15 10:40:34.119074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.798 [2024-07-15 10:40:34.119400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.798 [2024-07-15 10:40:34.119429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:45.798 [2024-07-15 10:40:34.125181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.798 [2024-07-15 10:40:34.125444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.798 [2024-07-15 10:40:34.125474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.798 [2024-07-15 10:40:34.130395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.798 [2024-07-15 10:40:34.130657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.798 [2024-07-15 10:40:34.130686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.798 [2024-07-15 10:40:34.135170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.798 [2024-07-15 10:40:34.135442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.798 [2024-07-15 10:40:34.135470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.798 [2024-07-15 10:40:34.139775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.798 [2024-07-15 10:40:34.140033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.798 [2024-07-15 10:40:34.140063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.798 [2024-07-15 10:40:34.144724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.798 [2024-07-15 10:40:34.144982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.798 [2024-07-15 10:40:34.145013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.798 [2024-07-15 10:40:34.149529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.798 [2024-07-15 10:40:34.149843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.798 [2024-07-15 10:40:34.149881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.798 [2024-07-15 10:40:34.154166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.798 [2024-07-15 10:40:34.154445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.798 [2024-07-15 10:40:34.154490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.798 [2024-07-15 10:40:34.158707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.798 [2024-07-15 10:40:34.158965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.798 [2024-07-15 10:40:34.158994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.163349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.163628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.163655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.168449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.168713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.168742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.174383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.174712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.174741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.181320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.181629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.181672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.187756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.188032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.188062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.193968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.194250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.194279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.200205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.200510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.200554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.206349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.206616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.206659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.212529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.212878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.212907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.218723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.219012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.219040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.224921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.225238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.225280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.230979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.231266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.231309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.237212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.237488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.237516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.243336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.243666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.243695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.249509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.249873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.249901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.255891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.256292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.256320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.262043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.262421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.262450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.268267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.268594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.268622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.274521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.274893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.274936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.280595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.280920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 
[2024-07-15 10:40:34.280948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.286657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.286972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.287002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.292564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.292897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.292940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.299461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.299726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.299755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.305203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.305478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.305512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.311530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.311797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.311832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.317742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.318001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.318032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.323188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.323487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.323515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.327917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.328226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.328254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-15 10:40:34.332565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.799 [2024-07-15 10:40:34.332828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-15 10:40:34.332857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.800 [2024-07-15 10:40:34.337198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.800 [2024-07-15 10:40:34.337479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-15 10:40:34.337507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.800 [2024-07-15 10:40:34.341891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.800 [2024-07-15 10:40:34.342166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-15 10:40:34.342194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.800 [2024-07-15 10:40:34.346445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:45.800 [2024-07-15 10:40:34.346720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-15 10:40:34.346752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.351129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.351401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.351432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.355865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.356131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.356165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.360522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.360808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.360856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.365246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.365520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.365549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.369962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.370250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.370279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.374647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.374917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.374947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.379466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.379735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.379764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.384103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.384370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.384399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.388669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.388960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.388995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.393298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.393562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.393590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.397903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.398155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.398200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.402477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.402733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.402761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.406948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.407244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.407272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.411565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.411853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.411883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.416281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.416545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.416572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.421005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 
[2024-07-15 10:40:34.421286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.421314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.425646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.425952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.425981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.430565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.430901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.430929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.435134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.435385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.435413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.439618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.439876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.439905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.059 [2024-07-15 10:40:34.444253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.059 [2024-07-15 10:40:34.444524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-15 10:40:34.444553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.448849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.449103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.449132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.453338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.453626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.453655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.457912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.458178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.458206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.462537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.462838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.462867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.467177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.467444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.467474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.471983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.472252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.472280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.476607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.476864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.476894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.481699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.481997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.482027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.487778] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.488064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.488094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.493107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.493372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.493401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.499076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.499341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.499370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.503905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.504177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.504205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.508508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.508774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.508829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.513147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.513423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.513455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.518030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.518283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.518313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
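The repeated pairs above are the NVMe/TCP data-digest check rejecting each WRITE: data_crc32_calc_done in tcp.c recomputes the CRC32C digest over the received PDU payload, finds a mismatch, and the host-side qpair code then prints the offending command and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion. As a rough, self-contained illustration only (this is not SPDK's code; the bit-by-bit CRC32C routine and the sample payload are assumptions made for the sketch), a data-digest check of this kind amounts to recomputing CRC32C over the data and comparing it with the digest carried alongside it:

    /* Hypothetical sketch of a CRC32C data-digest check, in the spirit of the
     * failures reported above. Not SPDK code. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise (slow but dependency-free) CRC32C: reflected, poly 0x82F63B78. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        const char payload[] = "123456789";   /* standard CRC32C test vector */
        uint32_t expected = 0xE3069283u;      /* digest the sender would attach */

        /* Intact payload: recomputed digest matches, data is accepted. */
        if (crc32c(payload, strlen(payload)) == expected)
            printf("data digest OK (0x%08X)\n", expected);

        /* Flip one bit to emulate corruption on the wire: the recomputed
         * digest no longer matches, which is the condition the log flags. */
        char corrupted[sizeof(payload)];
        memcpy(corrupted, payload, sizeof(payload));
        corrupted[0] ^= 0x01;
        if (crc32c(corrupted, strlen(corrupted)) != expected)
            fprintf(stderr, "Data digest error: payload does not match digest\n");
        return 0;
    }

A single flipped payload bit is enough to make the recomputed digest diverge, which is why the error-injection pass above produces one digest error and one transient-transport-error completion per WRITE.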
00:23:46.060 [2024-07-15 10:40:34.522533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.522784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.522824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.527044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.527339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.527366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.531689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.531979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.532009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.536366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.536641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.536669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.540969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.541250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.541278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.545601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.545890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.545919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.550249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.550523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.550551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.554921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.555246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.555274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.559956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.560232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.560260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.565760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.566063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.566093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.571902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.572187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.572215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.578225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.578570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.578613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.584973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.060 [2024-07-15 10:40:34.585302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-15 10:40:34.585332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.060 [2024-07-15 10:40:34.591574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.061 [2024-07-15 10:40:34.591878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.061 [2024-07-15 10:40:34.591908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.061 [2024-07-15 10:40:34.598478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.061 [2024-07-15 10:40:34.598818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.061 [2024-07-15 10:40:34.598848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.061 [2024-07-15 10:40:34.604842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.061 [2024-07-15 10:40:34.605229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.061 [2024-07-15 10:40:34.605261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.320 [2024-07-15 10:40:34.611856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.320 [2024-07-15 10:40:34.612212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-15 10:40:34.612257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.320 [2024-07-15 10:40:34.618871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.320 [2024-07-15 10:40:34.619192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-15 10:40:34.619222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.320 [2024-07-15 10:40:34.625990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.320 [2024-07-15 10:40:34.626292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-15 10:40:34.626322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.320 [2024-07-15 10:40:34.632341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.320 [2024-07-15 10:40:34.632637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-15 10:40:34.632668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.320 [2024-07-15 10:40:34.637714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.320 [2024-07-15 10:40:34.637978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-15 10:40:34.638009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.320 [2024-07-15 10:40:34.642584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.320 [2024-07-15 10:40:34.642877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-15 10:40:34.642908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.320 [2024-07-15 10:40:34.647400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.320 [2024-07-15 10:40:34.647652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-15 10:40:34.647683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.653415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.653749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.653778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.659203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.659470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.659505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.666139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.666410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.666441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.672237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.672553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.672582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.678351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.678678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 
[2024-07-15 10:40:34.678721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.684482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.684764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.684818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.691516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.691786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.691825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.698655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.698957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.698987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.705688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.705981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.706027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.712484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.712776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.712816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.719439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.719728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.719772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.726572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.726881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.726912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.733829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.734097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.734127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.740839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.741144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.741175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.747719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.748004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.748034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.754659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.754968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.754998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.760849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.761117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.761146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.765815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.766073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.766117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.770437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.770688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.770718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.775059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.775310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.775340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.779654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.779942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.779972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.784388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.784669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.784699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.789076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.789385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.789413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.793891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.794153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.794183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.798479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.798745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.798774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.803064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.803314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.803359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.807689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.807953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.807982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.812332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.812596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.321 [2024-07-15 10:40:34.812634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.321 [2024-07-15 10:40:34.816987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.321 [2024-07-15 10:40:34.817251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.322 [2024-07-15 10:40:34.817280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.322 [2024-07-15 10:40:34.821589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.322 [2024-07-15 10:40:34.821847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.322 [2024-07-15 10:40:34.821877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.322 [2024-07-15 10:40:34.826205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.322 [2024-07-15 10:40:34.826468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.322 [2024-07-15 10:40:34.826497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.322 [2024-07-15 10:40:34.830758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.322 [2024-07-15 10:40:34.831032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.322 [2024-07-15 10:40:34.831077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.322 [2024-07-15 10:40:34.835421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.322 
[2024-07-15 10:40:34.835717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.322 [2024-07-15 10:40:34.835746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.322 [2024-07-15 10:40:34.840059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.322 [2024-07-15 10:40:34.840342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.322 [2024-07-15 10:40:34.840371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.322 [2024-07-15 10:40:34.844663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.322 [2024-07-15 10:40:34.844920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.322 [2024-07-15 10:40:34.844950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.322 [2024-07-15 10:40:34.850236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.322 [2024-07-15 10:40:34.850519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.322 [2024-07-15 10:40:34.850549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.322 [2024-07-15 10:40:34.855670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.322 [2024-07-15 10:40:34.855931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.322 [2024-07-15 10:40:34.855961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.322 [2024-07-15 10:40:34.860282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.322 [2024-07-15 10:40:34.860565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.322 [2024-07-15 10:40:34.860595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.322 [2024-07-15 10:40:34.865526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.322 [2024-07-15 10:40:34.865780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.322 [2024-07-15 10:40:34.865838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.581 [2024-07-15 10:40:34.871394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.871671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.871702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.877409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.877668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.877719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.882162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.882426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.882456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.887142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.887394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.887424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.891779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.892041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.892071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.896430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.896725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.896760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.901134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.901397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.901426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.906837] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.907117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.907146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.912233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.912498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.912528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.916920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.917186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.917216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.921647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.921935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.921966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.926311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.926563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.926593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.930994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.931245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.931274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.935780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.936038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.936069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:46.582 [2024-07-15 10:40:34.940471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.940741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.940770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.945254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.945507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.945552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.950032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.950285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.950315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.954868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.955125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.955155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.959676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.959934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.959964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.964877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.965129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.965159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.970049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.970312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.970342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.975284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.975549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.975579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.981204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.981468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.981498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.986474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.986746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.986775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.991206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.991473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.991502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:34.995786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:34.996050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:34.996080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:35.000401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:35.000650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:35.000680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:35.005613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.582 [2024-07-15 10:40:35.005907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.582 [2024-07-15 10:40:35.005938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.582 [2024-07-15 10:40:35.011812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.012092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.012122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.017775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.018037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.018068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.024605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.024907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.024938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.031189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.031494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.031529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.037161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.037458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.037502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.043386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.043704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.043749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.049669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.050046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.050075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.055996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.056365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.056395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.062147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.062473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.062502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.068388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.068690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.068721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.075484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.075767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.075798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.081840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.082190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.082234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.087906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.088289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.088318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.094421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.094721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 
[2024-07-15 10:40:35.094751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.101130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.101448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.101493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.107850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.108168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.108211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.114557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.115093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.115122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.121752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.122036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.122066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.583 [2024-07-15 10:40:35.128360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.583 [2024-07-15 10:40:35.128669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.583 [2024-07-15 10:40:35.128701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.134001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.134269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.134300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.138908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.139166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.139197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.143582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.143875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.143905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.148817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.149126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.149169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.154890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.155210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.155239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.160913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.161216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.161245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.167659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.167969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.168000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.173990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.174255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.174301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.179358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.179625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.179653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.184260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.184527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.184556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.188961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.189231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.189268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.193586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.193845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.193875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.198236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.198503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.198533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.202849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.203103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.203147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.207533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.207869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.207898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.212252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.212514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.212542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.216898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.217172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.217202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.222254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.222519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.222548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.227551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.227811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.227845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.232740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.233011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.233057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.238792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.239095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.239139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.246044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 [2024-07-15 10:40:35.246341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.842 [2024-07-15 10:40:35.246386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.842 [2024-07-15 10:40:35.253021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.842 
[2024-07-15 10:40:35.253359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.843 [2024-07-15 10:40:35.253393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.843 [2024-07-15 10:40:35.259914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.843 [2024-07-15 10:40:35.260251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.843 [2024-07-15 10:40:35.260295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.843 [2024-07-15 10:40:35.267016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.843 [2024-07-15 10:40:35.267350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.843 [2024-07-15 10:40:35.267394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.843 [2024-07-15 10:40:35.274110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.843 [2024-07-15 10:40:35.274436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.843 [2024-07-15 10:40:35.274465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.843 [2024-07-15 10:40:35.281240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.843 [2024-07-15 10:40:35.281598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.843 [2024-07-15 10:40:35.281627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.843 [2024-07-15 10:40:35.288421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.843 [2024-07-15 10:40:35.288705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.843 [2024-07-15 10:40:35.288740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.843 [2024-07-15 10:40:35.295272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.843 [2024-07-15 10:40:35.295599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.843 [2024-07-15 10:40:35.295628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.843 [2024-07-15 10:40:35.301769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.843 [2024-07-15 10:40:35.302052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.843 [2024-07-15 10:40:35.302082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.843 [2024-07-15 10:40:35.308950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.843 [2024-07-15 10:40:35.309260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.843 [2024-07-15 10:40:35.309289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.843 [2024-07-15 10:40:35.315699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.843 [2024-07-15 10:40:35.315987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.843 [2024-07-15 10:40:35.316017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.843 [2024-07-15 10:40:35.322686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.843 [2024-07-15 10:40:35.323032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.843 [2024-07-15 10:40:35.323062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.843 [2024-07-15 10:40:35.329641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.843 [2024-07-15 10:40:35.329987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.843 [2024-07-15 10:40:35.330017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.843 [2024-07-15 10:40:35.335939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102eaf0) with pdu=0x2000190fef90 00:23:46.843 [2024-07-15 10:40:35.336027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.843 [2024-07-15 10:40:35.336056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.843 00:23:46.843 Latency(us) 00:23:46.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.843 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:46.843 nvme0n1 : 2.00 5758.02 719.75 0.00 0.00 2770.60 2123.85 7330.32 00:23:46.843 =================================================================================================================== 00:23:46.843 Total : 5758.02 719.75 0.00 0.00 2770.60 2123.85 7330.32 00:23:46.843 0 00:23:46.843 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:46.843 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:46.843 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:46.843 | .driver_specific 00:23:46.843 | .nvme_error 00:23:46.843 | .status_code 00:23:46.843 | .command_transient_transport_error' 00:23:46.843 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:47.100 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 372 > 0 )) 00:23:47.100 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1297732 00:23:47.100 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1297732 ']' 00:23:47.100 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1297732 00:23:47.100 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:23:47.100 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.100 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1297732 00:23:47.100 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:47.100 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:47.100 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1297732' 00:23:47.100 killing process with pid 1297732 00:23:47.100 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1297732 00:23:47.100 Received shutdown signal, test time was about 2.000000 seconds 00:23:47.100 00:23:47.100 Latency(us) 00:23:47.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.100 =================================================================================================================== 00:23:47.100 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:47.100 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1297732 00:23:47.664 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1296341 00:23:47.664 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1296341 ']' 00:23:47.664 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1296341 00:23:47.664 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:23:47.664 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.664 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1296341 00:23:47.664 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:47.664 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:47.664 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1296341' 
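For reference, the transient-error check traced just above reduces to one RPC query piped through jq; a condensed sketch of the same idea, using the bperf socket, bdev name and jq filter exactly as logged (the errcount variable name is illustrative only):

  # Ask the bdevperf app how many completions ended in COMMAND TRANSIENT TRANSPORT ERROR
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # the digest_error test only passes if at least one was recorded (372 in this run)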
00:23:47.664 killing process with pid 1296341 00:23:47.664 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1296341 00:23:47.664 10:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1296341 00:23:47.664 00:23:47.664 real 0m15.483s 00:23:47.664 user 0m30.937s 00:23:47.664 sys 0m4.093s 00:23:47.664 10:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:47.664 10:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:47.664 ************************************ 00:23:47.664 END TEST nvmf_digest_error 00:23:47.664 ************************************ 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:47.923 rmmod nvme_tcp 00:23:47.923 rmmod nvme_fabrics 00:23:47.923 rmmod nvme_keyring 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1296341 ']' 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1296341 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1296341 ']' 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1296341 00:23:47.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1296341) - No such process 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1296341 is not found' 00:23:47.923 Process with pid 1296341 is not found 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.923 10:40:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.831 10:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:49.831 00:23:49.831 real 0m35.271s 00:23:49.831 user 1m1.156s 00:23:49.831 sys 0m10.171s 00:23:49.831 10:40:38 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:49.831 10:40:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:49.831 ************************************ 00:23:49.831 END TEST nvmf_digest 00:23:49.831 ************************************ 00:23:49.831 10:40:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:49.831 10:40:38 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:23:49.831 10:40:38 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:23:49.831 10:40:38 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:23:49.831 10:40:38 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:23:49.831 10:40:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:49.831 10:40:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:49.831 10:40:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.089 ************************************ 00:23:50.089 START TEST nvmf_bdevperf 00:23:50.089 ************************************ 00:23:50.089 10:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:23:50.089 * Looking for test storage... 00:23:50.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.089 10:40:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.089 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:23:50.089 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.089 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.089 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.089 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.089 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.089 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.089 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.089 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.089 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.089 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.090 10:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.990 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:51.990 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:51.991 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.991 10:40:40 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:51.991 Found net devices under 0000:09:00.0: cvl_0_0 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:51.991 Found net devices under 0000:09:00.1: cvl_0_1 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:51.991 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:52.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:23:52.249 00:23:52.249 --- 10.0.0.2 ping statistics --- 00:23:52.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.249 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:23:52.249 00:23:52.249 --- 10.0.0.1 ping statistics --- 00:23:52.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.249 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1300193 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1300193 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1300193 ']' 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.249 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
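Condensed, the network plumbing that nvmf_tcp_init traced above comes down to the following sequence (interface names and addresses taken verbatim from the log; this is a summary sketch of what already ran, not an extra step):

  # Target-side port (cvl_0_0) moves into its own namespace; initiator port (cvl_0_1) stays in the root namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                           # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target ns -> root ns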
00:23:52.250 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.250 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.250 [2024-07-15 10:40:40.665889] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:52.250 [2024-07-15 10:40:40.665996] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.250 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.250 [2024-07-15 10:40:40.728887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:52.508 [2024-07-15 10:40:40.833954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.508 [2024-07-15 10:40:40.834004] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.508 [2024-07-15 10:40:40.834028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.508 [2024-07-15 10:40:40.834039] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.508 [2024-07-15 10:40:40.834049] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.508 [2024-07-15 10:40:40.834130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.508 [2024-07-15 10:40:40.834195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.508 [2024-07-15 10:40:40.834198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.508 [2024-07-15 10:40:40.960498] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.508 Malloc0 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
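The target bring-up in this stretch is easier to follow condensed; a sketch under the assumption that rpc_cmd is the harness's RPC helper and that backgrounding/pid capture is written explicitly (paths, flags and values as logged):

  # nvmf_tgt runs inside the target namespace on cores 1-3 (-m 0xE)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!                                        # 1300193 in this run
  waitforlisten "$nvmfpid"                          # blocks until /var/tmp/spdk.sock answers
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192   # TCP transport, options exactly as logged
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512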
00:23:52.508 10:40:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.508 [2024-07-15 10:40:41.019588] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:52.508 { 00:23:52.508 "params": { 00:23:52.508 "name": "Nvme$subsystem", 00:23:52.508 "trtype": "$TEST_TRANSPORT", 00:23:52.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.508 "adrfam": "ipv4", 00:23:52.508 "trsvcid": "$NVMF_PORT", 00:23:52.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.508 "hdgst": ${hdgst:-false}, 00:23:52.508 "ddgst": ${ddgst:-false} 00:23:52.508 }, 00:23:52.508 "method": "bdev_nvme_attach_controller" 00:23:52.508 } 00:23:52.508 EOF 00:23:52.508 )") 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:23:52.508 10:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:52.508 "params": { 00:23:52.508 "name": "Nvme1", 00:23:52.508 "trtype": "tcp", 00:23:52.508 "traddr": "10.0.0.2", 00:23:52.508 "adrfam": "ipv4", 00:23:52.508 "trsvcid": "4420", 00:23:52.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.508 "hdgst": false, 00:23:52.508 "ddgst": false 00:23:52.508 }, 00:23:52.508 "method": "bdev_nvme_attach_controller" 00:23:52.508 }' 00:23:52.766 [2024-07-15 10:40:41.064325] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
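Target-side provisioning and the first initiator run, condensed from the trace above (every argument is as logged; the JSON that gen_nvmf_target_json puts on /dev/fd/62 is the Nvme1 / 10.0.0.2:4420 / cnode1 config printed just above, with header and data digests disabled):

  # Export the malloc bdev as namespace 1 of cnode1, listening on the namespaced target address
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: bdevperf attaches over TCP via the generated JSON and runs a short verify pass
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1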
00:23:52.766 [2024-07-15 10:40:41.064412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300219 ]
00:23:52.766 EAL: No free 2048 kB hugepages reported on node 1
00:23:52.766 [2024-07-15 10:40:41.124633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:52.766 [2024-07-15 10:40:41.236849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:53.024 Running I/O for 1 seconds...
00:23:53.957
00:23:53.957                                                                Latency(us)
00:23:53.957 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:53.957 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:53.957   Verification LBA range: start 0x0 length 0x4000
00:23:53.957   Nvme1n1             :       1.01    8687.66      33.94       0.00       0.00   14671.57    1334.99   15049.01
00:23:53.957 ===================================================================================================================
00:23:53.957 Total                       :                  8687.66      33.94       0.00       0.00   14671.57    1334.99   15049.01
00:23:54.215 10:40:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1300482
00:23:54.215 10:40:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:23:54.216 10:40:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:23:54.216 10:40:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:23:54.216 10:40:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:23:54.216 10:40:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:23:54.216 10:40:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:23:54.216 10:40:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:23:54.216 {
00:23:54.216 "params": {
00:23:54.216 "name": "Nvme$subsystem",
00:23:54.216 "trtype": "$TEST_TRANSPORT",
00:23:54.216 "traddr": "$NVMF_FIRST_TARGET_IP",
00:23:54.216 "adrfam": "ipv4",
00:23:54.216 "trsvcid": "$NVMF_PORT",
00:23:54.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:23:54.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:23:54.216 "hdgst": ${hdgst:-false},
00:23:54.216 "ddgst": ${ddgst:-false}
00:23:54.216 },
00:23:54.216 "method": "bdev_nvme_attach_controller"
00:23:54.216 }
00:23:54.216 EOF
00:23:54.216 )")
00:23:54.216 10:40:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:23:54.216 10:40:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:23:54.216 10:40:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:23:54.216 10:40:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:23:54.216 "params": {
00:23:54.216 "name": "Nvme1",
00:23:54.216 "trtype": "tcp",
00:23:54.216 "traddr": "10.0.0.2",
00:23:54.216 "adrfam": "ipv4",
00:23:54.216 "trsvcid": "4420",
00:23:54.216 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:54.216 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:54.216 "hdgst": false,
00:23:54.216 "ddgst": false
00:23:54.216 },
00:23:54.216 "method": "bdev_nvme_attach_controller"
00:23:54.216 }'
00:23:54.216 [2024-07-15 10:40:42.743111] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
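Note (editorial aside, not test output): gen_nvmf_target_json expands the heredoc above into a single bdev_nvme_attach_controller entry and feeds it to bdevperf through a process substitution (/dev/fd/63 here). A hedged standalone equivalent is sketched below; the /tmp path is hypothetical, and the outer "subsystems"/"bdev" wrapper is assumed from SPDK's usual JSON-config layout since the xtrace only shows the inner entry:
  # write the host-side config that attaches the remote namespace as bdev "Nvme1n1"
  cat > /tmp/bdevperf_nvme.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # same workload as the second run above: qd 128, 4 KiB verify for 15 s, keep going on failure
  build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15 -f
The 1-second baseline above is also internally consistent: 8687.66 IOPS at a 4096-byte I/O size is 8687.66 * 4096 / 1048576, roughly 33.9 MiB/s, matching the reported 33.94.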
00:23:54.216 [2024-07-15 10:40:42.743208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300482 ] 00:23:54.473 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.474 [2024-07-15 10:40:42.802753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.474 [2024-07-15 10:40:42.910764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.731 Running I/O for 15 seconds... 00:23:57.264 10:40:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1300193 00:23:57.264 10:40:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:23:57.264 [2024-07-15 10:40:45.715651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.264 [2024-07-15 10:40:45.715720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.715751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.715769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.715809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.715838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.715855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.715871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.715889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.715906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.715922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.715940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.715959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.715983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 
[2024-07-15 10:40:45.716389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.264 [2024-07-15 10:40:45.716853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.264 [2024-07-15 10:40:45.716868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.716884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.716899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.716915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.716929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.716944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.716959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.716974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.716988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:60 nsid:1 lba:48232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:48248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:48312 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:48376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 
10:40:45.717629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.717977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.717997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.718012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.718028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.265 [2024-07-15 10:40:45.718043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.265 [2024-07-15 10:40:45.718060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718878] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.718982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.718998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.719012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.719028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.719042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.719057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.719072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.719088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.719116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.719132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.719149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.719179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.719193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.719207] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.719220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.719234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.719246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.719259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.719272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.719286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.266 [2024-07-15 10:40:45.719298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.719312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.266 [2024-07-15 10:40:45.719330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.719345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.266 [2024-07-15 10:40:45.719358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.266 [2024-07-15 10:40:45.719371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.267 [2024-07-15 10:40:45.719384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.267 [2024-07-15 10:40:45.719397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.267 [2024-07-15 10:40:45.719411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.267 [2024-07-15 10:40:45.719424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.267 [2024-07-15 10:40:45.719437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.267 [2024-07-15 10:40:45.719451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.267 [2024-07-15 10:40:45.719463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.267 [2024-07-15 10:40:45.719477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47864 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.267 [2024-07-15 10:40:45.719489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.267 [2024-07-15 10:40:45.719503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.267 [2024-07-15 10:40:45.719519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.267 [2024-07-15 10:40:45.719533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.267 [2024-07-15 10:40:45.719546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.267 [2024-07-15 10:40:45.719560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.267 [2024-07-15 10:40:45.719572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.267 [2024-07-15 10:40:45.719586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.267 [2024-07-15 10:40:45.719598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.267 [2024-07-15 10:40:45.719612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.267 [2024-07-15 10:40:45.719625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.267 [2024-07-15 10:40:45.719638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.267 [2024-07-15 10:40:45.719651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.267 [2024-07-15 10:40:45.719665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.267 [2024-07-15 10:40:45.719677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.267 [2024-07-15 10:40:45.719691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b254c0 is same with the state(5) to be set 00:23:57.267 [2024-07-15 10:40:45.719706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.267 [2024-07-15 10:40:45.719718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.267 [2024-07-15 10:40:45.719727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47928 len:8 PRP1 0x0 PRP2 0x0 00:23:57.267 [2024-07-15 10:40:45.719740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.267 [2024-07-15 10:40:45.719832] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b254c0 was 
disconnected and freed. reset controller. 00:23:57.267 [2024-07-15 10:40:45.723173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.267 [2024-07-15 10:40:45.723251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.267 [2024-07-15 10:40:45.723888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.267 [2024-07-15 10:40:45.723918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.267 [2024-07-15 10:40:45.723936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.267 [2024-07-15 10:40:45.724181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.267 [2024-07-15 10:40:45.724376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.267 [2024-07-15 10:40:45.724395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.267 [2024-07-15 10:40:45.724415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.267 [2024-07-15 10:40:45.727544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.267 [2024-07-15 10:40:45.736899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.267 [2024-07-15 10:40:45.737323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.267 [2024-07-15 10:40:45.737352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.267 [2024-07-15 10:40:45.737369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.267 [2024-07-15 10:40:45.737611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.267 [2024-07-15 10:40:45.737861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.267 [2024-07-15 10:40:45.737882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.267 [2024-07-15 10:40:45.737896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.267 [2024-07-15 10:40:45.741029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.267 [2024-07-15 10:40:45.750228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.267 [2024-07-15 10:40:45.750629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.267 [2024-07-15 10:40:45.750683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.267 [2024-07-15 10:40:45.750699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.267 [2024-07-15 10:40:45.750962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.267 [2024-07-15 10:40:45.751177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.267 [2024-07-15 10:40:45.751197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.267 [2024-07-15 10:40:45.751210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.267 [2024-07-15 10:40:45.754236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.267 [2024-07-15 10:40:45.763576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.267 [2024-07-15 10:40:45.763921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.267 [2024-07-15 10:40:45.763948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.267 [2024-07-15 10:40:45.763964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.267 [2024-07-15 10:40:45.764215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.267 [2024-07-15 10:40:45.764440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.267 [2024-07-15 10:40:45.764460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.267 [2024-07-15 10:40:45.764472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.267 [2024-07-15 10:40:45.767808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.267 [2024-07-15 10:40:45.776958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.267 [2024-07-15 10:40:45.777345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.267 [2024-07-15 10:40:45.777373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.267 [2024-07-15 10:40:45.777389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.267 [2024-07-15 10:40:45.777626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.267 [2024-07-15 10:40:45.777885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.267 [2024-07-15 10:40:45.777908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.267 [2024-07-15 10:40:45.777923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.267 [2024-07-15 10:40:45.781062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.267 [2024-07-15 10:40:45.790384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.267 [2024-07-15 10:40:45.790738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.267 [2024-07-15 10:40:45.790766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.267 [2024-07-15 10:40:45.790782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.267 [2024-07-15 10:40:45.791005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.267 [2024-07-15 10:40:45.791240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.267 [2024-07-15 10:40:45.791260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.267 [2024-07-15 10:40:45.791273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.267 [2024-07-15 10:40:45.794412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.267 [2024-07-15 10:40:45.803742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.267 [2024-07-15 10:40:45.804130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.267 [2024-07-15 10:40:45.804173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.267 [2024-07-15 10:40:45.804190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.267 [2024-07-15 10:40:45.804445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.267 [2024-07-15 10:40:45.804654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.267 [2024-07-15 10:40:45.804675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.267 [2024-07-15 10:40:45.804688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.267 [2024-07-15 10:40:45.808121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.577 [2024-07-15 10:40:45.818372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.577 [2024-07-15 10:40:45.818813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.577 [2024-07-15 10:40:45.818855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.577 [2024-07-15 10:40:45.818885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.577 [2024-07-15 10:40:45.819200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.577 [2024-07-15 10:40:45.819497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.577 [2024-07-15 10:40:45.819527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.577 [2024-07-15 10:40:45.819552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.577 [2024-07-15 10:40:45.823536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.577 [2024-07-15 10:40:45.831753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.577 [2024-07-15 10:40:45.832149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.577 [2024-07-15 10:40:45.832179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.577 [2024-07-15 10:40:45.832210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.577 [2024-07-15 10:40:45.832428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.577 [2024-07-15 10:40:45.832649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.577 [2024-07-15 10:40:45.832668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.577 [2024-07-15 10:40:45.832681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.577 [2024-07-15 10:40:45.835720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.577 [2024-07-15 10:40:45.845208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.577 [2024-07-15 10:40:45.845584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.577 [2024-07-15 10:40:45.845611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.577 [2024-07-15 10:40:45.845627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.577 [2024-07-15 10:40:45.845858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.577 [2024-07-15 10:40:45.846077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.577 [2024-07-15 10:40:45.846113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.577 [2024-07-15 10:40:45.846126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.577 [2024-07-15 10:40:45.849078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.577 [2024-07-15 10:40:45.858404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.577 [2024-07-15 10:40:45.858811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.577 [2024-07-15 10:40:45.858854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.577 [2024-07-15 10:40:45.858871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.577 [2024-07-15 10:40:45.859101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.577 [2024-07-15 10:40:45.859307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.577 [2024-07-15 10:40:45.859326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.577 [2024-07-15 10:40:45.859344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.577 [2024-07-15 10:40:45.862396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.577 [2024-07-15 10:40:45.871714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.577 [2024-07-15 10:40:45.872125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.577 [2024-07-15 10:40:45.872153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.578 [2024-07-15 10:40:45.872168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.578 [2024-07-15 10:40:45.872385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.578 [2024-07-15 10:40:45.872588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.578 [2024-07-15 10:40:45.872607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.578 [2024-07-15 10:40:45.872619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.578 [2024-07-15 10:40:45.875594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.578 [2024-07-15 10:40:45.884897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.578 [2024-07-15 10:40:45.885355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.578 [2024-07-15 10:40:45.885409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.578 [2024-07-15 10:40:45.885424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.578 [2024-07-15 10:40:45.885672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.578 [2024-07-15 10:40:45.885904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.578 [2024-07-15 10:40:45.885925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.578 [2024-07-15 10:40:45.885939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.578 [2024-07-15 10:40:45.888874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.578 [2024-07-15 10:40:45.898144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.578 [2024-07-15 10:40:45.898551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.578 [2024-07-15 10:40:45.898579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.578 [2024-07-15 10:40:45.898595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.578 [2024-07-15 10:40:45.898847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.578 [2024-07-15 10:40:45.899040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.578 [2024-07-15 10:40:45.899059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.578 [2024-07-15 10:40:45.899073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.578 [2024-07-15 10:40:45.902030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.578 [2024-07-15 10:40:45.911418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.578 [2024-07-15 10:40:45.911892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.578 [2024-07-15 10:40:45.911926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.578 [2024-07-15 10:40:45.911943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.578 [2024-07-15 10:40:45.912186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.578 [2024-07-15 10:40:45.912373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.578 [2024-07-15 10:40:45.912393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.578 [2024-07-15 10:40:45.912405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.578 [2024-07-15 10:40:45.915288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.578 [2024-07-15 10:40:45.924538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.578 [2024-07-15 10:40:45.924943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.578 [2024-07-15 10:40:45.924971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.578 [2024-07-15 10:40:45.924986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.578 [2024-07-15 10:40:45.925217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.578 [2024-07-15 10:40:45.925419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.578 [2024-07-15 10:40:45.925439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.578 [2024-07-15 10:40:45.925452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.578 [2024-07-15 10:40:45.928350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.578 [2024-07-15 10:40:45.937638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.578 [2024-07-15 10:40:45.938054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.578 [2024-07-15 10:40:45.938082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.578 [2024-07-15 10:40:45.938099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.578 [2024-07-15 10:40:45.938332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.578 [2024-07-15 10:40:45.938535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.578 [2024-07-15 10:40:45.938555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.578 [2024-07-15 10:40:45.938567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.578 [2024-07-15 10:40:45.941507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.578 [2024-07-15 10:40:45.950863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.578 [2024-07-15 10:40:45.951214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.578 [2024-07-15 10:40:45.951241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.578 [2024-07-15 10:40:45.951257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.578 [2024-07-15 10:40:45.951475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.578 [2024-07-15 10:40:45.951684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.578 [2024-07-15 10:40:45.951703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.578 [2024-07-15 10:40:45.951716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.578 [2024-07-15 10:40:45.954604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.578 [2024-07-15 10:40:45.963901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.578 [2024-07-15 10:40:45.964272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.578 [2024-07-15 10:40:45.964300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.578 [2024-07-15 10:40:45.964315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.578 [2024-07-15 10:40:45.964532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.578 [2024-07-15 10:40:45.964736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.578 [2024-07-15 10:40:45.964755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.578 [2024-07-15 10:40:45.964767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.578 [2024-07-15 10:40:45.967669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.578 [2024-07-15 10:40:45.977284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.578 [2024-07-15 10:40:45.977659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.578 [2024-07-15 10:40:45.977687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.578 [2024-07-15 10:40:45.977702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.578 [2024-07-15 10:40:45.977967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.578 [2024-07-15 10:40:45.978215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.578 [2024-07-15 10:40:45.978250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.578 [2024-07-15 10:40:45.978263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.578 [2024-07-15 10:40:45.981394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.578 [2024-07-15 10:40:45.990685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.578 [2024-07-15 10:40:45.991057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.578 [2024-07-15 10:40:45.991086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.578 [2024-07-15 10:40:45.991103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.578 [2024-07-15 10:40:45.991357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.578 [2024-07-15 10:40:45.991557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.578 [2024-07-15 10:40:45.991578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.578 [2024-07-15 10:40:45.991592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.578 [2024-07-15 10:40:45.994824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.578 [2024-07-15 10:40:46.004059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.578 [2024-07-15 10:40:46.004531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.578 [2024-07-15 10:40:46.004587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.578 [2024-07-15 10:40:46.004603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.578 [2024-07-15 10:40:46.004858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.578 [2024-07-15 10:40:46.005067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.578 [2024-07-15 10:40:46.005087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.578 [2024-07-15 10:40:46.005099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.578 [2024-07-15 10:40:46.008021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.578 [2024-07-15 10:40:46.017147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.578 [2024-07-15 10:40:46.017551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.579 [2024-07-15 10:40:46.017579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.579 [2024-07-15 10:40:46.017594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.579 [2024-07-15 10:40:46.017842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.579 [2024-07-15 10:40:46.018055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.579 [2024-07-15 10:40:46.018076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.579 [2024-07-15 10:40:46.018090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.579 [2024-07-15 10:40:46.020977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.579 [2024-07-15 10:40:46.030243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.579 [2024-07-15 10:40:46.030646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.579 [2024-07-15 10:40:46.030673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.579 [2024-07-15 10:40:46.030689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.579 [2024-07-15 10:40:46.030937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.579 [2024-07-15 10:40:46.031163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.579 [2024-07-15 10:40:46.031183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.579 [2024-07-15 10:40:46.031196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.579 [2024-07-15 10:40:46.034056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.579 [2024-07-15 10:40:46.043437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.579 [2024-07-15 10:40:46.043779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.579 [2024-07-15 10:40:46.043829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.579 [2024-07-15 10:40:46.043851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.579 [2024-07-15 10:40:46.044086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.579 [2024-07-15 10:40:46.044290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.579 [2024-07-15 10:40:46.044309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.579 [2024-07-15 10:40:46.044323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.579 [2024-07-15 10:40:46.047238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.579 [2024-07-15 10:40:46.056423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.579 [2024-07-15 10:40:46.056830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.579 [2024-07-15 10:40:46.056858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.579 [2024-07-15 10:40:46.056874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.579 [2024-07-15 10:40:46.057111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.579 [2024-07-15 10:40:46.057314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.579 [2024-07-15 10:40:46.057334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.579 [2024-07-15 10:40:46.057346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.579 [2024-07-15 10:40:46.060233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.579 [2024-07-15 10:40:46.069559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.579 [2024-07-15 10:40:46.069933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.579 [2024-07-15 10:40:46.069961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.579 [2024-07-15 10:40:46.069977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.579 [2024-07-15 10:40:46.070200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.579 [2024-07-15 10:40:46.070402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.579 [2024-07-15 10:40:46.070422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.579 [2024-07-15 10:40:46.070435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.579 [2024-07-15 10:40:46.073347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.579 [2024-07-15 10:40:46.082630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.579 [2024-07-15 10:40:46.083058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.579 [2024-07-15 10:40:46.083088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.579 [2024-07-15 10:40:46.083104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.579 [2024-07-15 10:40:46.083339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.579 [2024-07-15 10:40:46.083544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.579 [2024-07-15 10:40:46.083568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.579 [2024-07-15 10:40:46.083582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.579 [2024-07-15 10:40:46.086498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.579 [2024-07-15 10:40:46.095809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.579 [2024-07-15 10:40:46.096262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.579 [2024-07-15 10:40:46.096301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.579 [2024-07-15 10:40:46.096329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.579 [2024-07-15 10:40:46.096632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.579 [2024-07-15 10:40:46.096942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.579 [2024-07-15 10:40:46.096973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.579 [2024-07-15 10:40:46.096998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.579 [2024-07-15 10:40:46.100999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.838 [2024-07-15 10:40:46.109245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.838 [2024-07-15 10:40:46.109707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.838 [2024-07-15 10:40:46.109759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.838 [2024-07-15 10:40:46.109776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.838 [2024-07-15 10:40:46.110056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.838 [2024-07-15 10:40:46.110277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.838 [2024-07-15 10:40:46.110297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.838 [2024-07-15 10:40:46.110310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.838 [2024-07-15 10:40:46.113310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.838 [2024-07-15 10:40:46.122420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.838 [2024-07-15 10:40:46.122796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.838 [2024-07-15 10:40:46.122831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.838 [2024-07-15 10:40:46.122848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.838 [2024-07-15 10:40:46.123065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.838 [2024-07-15 10:40:46.123268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.838 [2024-07-15 10:40:46.123288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.838 [2024-07-15 10:40:46.123301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.838 [2024-07-15 10:40:46.126201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.838 [2024-07-15 10:40:46.135524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.838 [2024-07-15 10:40:46.135845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.838 [2024-07-15 10:40:46.135875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.838 [2024-07-15 10:40:46.135891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.838 [2024-07-15 10:40:46.136109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.838 [2024-07-15 10:40:46.136312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.838 [2024-07-15 10:40:46.136332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.838 [2024-07-15 10:40:46.136345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.838 [2024-07-15 10:40:46.139250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.838 [2024-07-15 10:40:46.148546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.838 [2024-07-15 10:40:46.148953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.838 [2024-07-15 10:40:46.148982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.838 [2024-07-15 10:40:46.148998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.838 [2024-07-15 10:40:46.149233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.838 [2024-07-15 10:40:46.149436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.838 [2024-07-15 10:40:46.149456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.838 [2024-07-15 10:40:46.149469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.838 [2024-07-15 10:40:46.152394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.838 [2024-07-15 10:40:46.161601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.838 [2024-07-15 10:40:46.162017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.838 [2024-07-15 10:40:46.162045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.838 [2024-07-15 10:40:46.162061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.838 [2024-07-15 10:40:46.162298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.838 [2024-07-15 10:40:46.162501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.838 [2024-07-15 10:40:46.162521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.838 [2024-07-15 10:40:46.162534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.838 [2024-07-15 10:40:46.165463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.838 [2024-07-15 10:40:46.174808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.838 [2024-07-15 10:40:46.175128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.838 [2024-07-15 10:40:46.175154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.838 [2024-07-15 10:40:46.175169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.838 [2024-07-15 10:40:46.175386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.838 [2024-07-15 10:40:46.175589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.838 [2024-07-15 10:40:46.175608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.838 [2024-07-15 10:40:46.175621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.838 [2024-07-15 10:40:46.178538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.838 [2024-07-15 10:40:46.187932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.838 [2024-07-15 10:40:46.188286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.838 [2024-07-15 10:40:46.188314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.838 [2024-07-15 10:40:46.188330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.838 [2024-07-15 10:40:46.188545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.838 [2024-07-15 10:40:46.188749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.838 [2024-07-15 10:40:46.188769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.839 [2024-07-15 10:40:46.188781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.839 [2024-07-15 10:40:46.191724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.839 [2024-07-15 10:40:46.200980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.839 [2024-07-15 10:40:46.201292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.839 [2024-07-15 10:40:46.201321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.839 [2024-07-15 10:40:46.201336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.839 [2024-07-15 10:40:46.201552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.839 [2024-07-15 10:40:46.201755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.839 [2024-07-15 10:40:46.201775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.839 [2024-07-15 10:40:46.201788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.839 [2024-07-15 10:40:46.204728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.839 [2024-07-15 10:40:46.213990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.839 [2024-07-15 10:40:46.214332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.839 [2024-07-15 10:40:46.214360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.839 [2024-07-15 10:40:46.214376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.839 [2024-07-15 10:40:46.214611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.839 [2024-07-15 10:40:46.214841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.839 [2024-07-15 10:40:46.214879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.839 [2024-07-15 10:40:46.214900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.839 [2024-07-15 10:40:46.217782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.839 [2024-07-15 10:40:46.227334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.839 [2024-07-15 10:40:46.227665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.839 [2024-07-15 10:40:46.227693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.839 [2024-07-15 10:40:46.227709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.839 [2024-07-15 10:40:46.227955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.839 [2024-07-15 10:40:46.228196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.839 [2024-07-15 10:40:46.228215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.839 [2024-07-15 10:40:46.228228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.839 [2024-07-15 10:40:46.231325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.839 [2024-07-15 10:40:46.240625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.839 [2024-07-15 10:40:46.241056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.839 [2024-07-15 10:40:46.241091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.839 [2024-07-15 10:40:46.241107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.839 [2024-07-15 10:40:46.241343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.839 [2024-07-15 10:40:46.241545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.839 [2024-07-15 10:40:46.241564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.839 [2024-07-15 10:40:46.241576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.839 [2024-07-15 10:40:46.244458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.839 [2024-07-15 10:40:46.254098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.839 [2024-07-15 10:40:46.254423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.839 [2024-07-15 10:40:46.254450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.839 [2024-07-15 10:40:46.254465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.839 [2024-07-15 10:40:46.254682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.839 [2024-07-15 10:40:46.254932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.839 [2024-07-15 10:40:46.254953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.839 [2024-07-15 10:40:46.254966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.839 [2024-07-15 10:40:46.258021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.839 [2024-07-15 10:40:46.267444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.839 [2024-07-15 10:40:46.267808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.839 [2024-07-15 10:40:46.267847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.839 [2024-07-15 10:40:46.267864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.839 [2024-07-15 10:40:46.268092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.839 [2024-07-15 10:40:46.268304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.839 [2024-07-15 10:40:46.268324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.839 [2024-07-15 10:40:46.268338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.839 [2024-07-15 10:40:46.271403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.839 [2024-07-15 10:40:46.280693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.839 [2024-07-15 10:40:46.281030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.839 [2024-07-15 10:40:46.281059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.839 [2024-07-15 10:40:46.281075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.839 [2024-07-15 10:40:46.281315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.839 [2024-07-15 10:40:46.281525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.839 [2024-07-15 10:40:46.281545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.839 [2024-07-15 10:40:46.281558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.839 [2024-07-15 10:40:46.284559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.839 [2024-07-15 10:40:46.293955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.839 [2024-07-15 10:40:46.294328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.839 [2024-07-15 10:40:46.294357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.839 [2024-07-15 10:40:46.294373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.839 [2024-07-15 10:40:46.294615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.839 [2024-07-15 10:40:46.294848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.839 [2024-07-15 10:40:46.294869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.839 [2024-07-15 10:40:46.294883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.839 [2024-07-15 10:40:46.297901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.839 [2024-07-15 10:40:46.307966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.839 [2024-07-15 10:40:46.308453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.839 [2024-07-15 10:40:46.308495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.839 [2024-07-15 10:40:46.308524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.839 [2024-07-15 10:40:46.308856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.839 [2024-07-15 10:40:46.309184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.839 [2024-07-15 10:40:46.309214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.839 [2024-07-15 10:40:46.309235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.839 [2024-07-15 10:40:46.313106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.839 [2024-07-15 10:40:46.321325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.839 [2024-07-15 10:40:46.321709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.839 [2024-07-15 10:40:46.321739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.839 [2024-07-15 10:40:46.321756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.839 [2024-07-15 10:40:46.322012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.839 [2024-07-15 10:40:46.322247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.839 [2024-07-15 10:40:46.322267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.839 [2024-07-15 10:40:46.322281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.839 [2024-07-15 10:40:46.325244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.839 [2024-07-15 10:40:46.334635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.839 [2024-07-15 10:40:46.334991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.839 [2024-07-15 10:40:46.335021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.839 [2024-07-15 10:40:46.335039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.839 [2024-07-15 10:40:46.335294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.839 [2024-07-15 10:40:46.335488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.839 [2024-07-15 10:40:46.335509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.840 [2024-07-15 10:40:46.335521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.840 [2024-07-15 10:40:46.338510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.840 [2024-07-15 10:40:46.347847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.840 [2024-07-15 10:40:46.348210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.840 [2024-07-15 10:40:46.348238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.840 [2024-07-15 10:40:46.348254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.840 [2024-07-15 10:40:46.348490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.840 [2024-07-15 10:40:46.348698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.840 [2024-07-15 10:40:46.348719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.840 [2024-07-15 10:40:46.348732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.840 [2024-07-15 10:40:46.351743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.840 [2024-07-15 10:40:46.361199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.840 [2024-07-15 10:40:46.361614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.840 [2024-07-15 10:40:46.361643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.840 [2024-07-15 10:40:46.361660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.840 [2024-07-15 10:40:46.361915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.840 [2024-07-15 10:40:46.362135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.840 [2024-07-15 10:40:46.362154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.840 [2024-07-15 10:40:46.362168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.840 [2024-07-15 10:40:46.365144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.840 [2024-07-15 10:40:46.374437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.840 [2024-07-15 10:40:46.374821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.840 [2024-07-15 10:40:46.374849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:57.840 [2024-07-15 10:40:46.374865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:57.840 [2024-07-15 10:40:46.375089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:57.840 [2024-07-15 10:40:46.375299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.840 [2024-07-15 10:40:46.375320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.840 [2024-07-15 10:40:46.375333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.840 [2024-07-15 10:40:46.378301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.099 [2024-07-15 10:40:46.388207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.099 [2024-07-15 10:40:46.388590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.099 [2024-07-15 10:40:46.388620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.099 [2024-07-15 10:40:46.388636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.099 [2024-07-15 10:40:46.388870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.099 [2024-07-15 10:40:46.389075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.099 [2024-07-15 10:40:46.389110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.099 [2024-07-15 10:40:46.389125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.099 [2024-07-15 10:40:46.392347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.099 [2024-07-15 10:40:46.401518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.099 [2024-07-15 10:40:46.401905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.099 [2024-07-15 10:40:46.401940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.099 [2024-07-15 10:40:46.401958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.099 [2024-07-15 10:40:46.402201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.099 [2024-07-15 10:40:46.402396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.099 [2024-07-15 10:40:46.402417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.099 [2024-07-15 10:40:46.402430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.099 [2024-07-15 10:40:46.405430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.099 [2024-07-15 10:40:46.414711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.099 [2024-07-15 10:40:46.415093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.099 [2024-07-15 10:40:46.415123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.099 [2024-07-15 10:40:46.415139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.099 [2024-07-15 10:40:46.415382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.099 [2024-07-15 10:40:46.415592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.099 [2024-07-15 10:40:46.415612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.099 [2024-07-15 10:40:46.415625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.099 [2024-07-15 10:40:46.418627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.099 [2024-07-15 10:40:46.428053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.099 [2024-07-15 10:40:46.428388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.100 [2024-07-15 10:40:46.428416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.100 [2024-07-15 10:40:46.428432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.100 [2024-07-15 10:40:46.428649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.100 [2024-07-15 10:40:46.428899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.100 [2024-07-15 10:40:46.428922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.100 [2024-07-15 10:40:46.428936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.100 [2024-07-15 10:40:46.431917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.100 [2024-07-15 10:40:46.441412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.100 [2024-07-15 10:40:46.441696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.100 [2024-07-15 10:40:46.441737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.100 [2024-07-15 10:40:46.441753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.100 [2024-07-15 10:40:46.442034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.100 [2024-07-15 10:40:46.442274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.100 [2024-07-15 10:40:46.442295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.100 [2024-07-15 10:40:46.442308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.100 [2024-07-15 10:40:46.445264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.100 [2024-07-15 10:40:46.454681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.100 [2024-07-15 10:40:46.455056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.100 [2024-07-15 10:40:46.455084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.100 [2024-07-15 10:40:46.455101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.100 [2024-07-15 10:40:46.455353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.100 [2024-07-15 10:40:46.455563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.100 [2024-07-15 10:40:46.455583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.100 [2024-07-15 10:40:46.455597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.100 [2024-07-15 10:40:46.458581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.100 [2024-07-15 10:40:46.468036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.100 [2024-07-15 10:40:46.468404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.100 [2024-07-15 10:40:46.468432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.100 [2024-07-15 10:40:46.468448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.100 [2024-07-15 10:40:46.468684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.100 [2024-07-15 10:40:46.468937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.100 [2024-07-15 10:40:46.468959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.100 [2024-07-15 10:40:46.468973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.100 [2024-07-15 10:40:46.472149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.100 [2024-07-15 10:40:46.481568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.100 [2024-07-15 10:40:46.481886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.100 [2024-07-15 10:40:46.481914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.100 [2024-07-15 10:40:46.481930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.100 [2024-07-15 10:40:46.482153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.100 [2024-07-15 10:40:46.482379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.100 [2024-07-15 10:40:46.482399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.100 [2024-07-15 10:40:46.482412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.100 [2024-07-15 10:40:46.485408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.100 [2024-07-15 10:40:46.494752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.100 [2024-07-15 10:40:46.495268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.100 [2024-07-15 10:40:46.495296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.100 [2024-07-15 10:40:46.495312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.100 [2024-07-15 10:40:46.495549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.100 [2024-07-15 10:40:46.495758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.100 [2024-07-15 10:40:46.495779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.100 [2024-07-15 10:40:46.495792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.100 [2024-07-15 10:40:46.498815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.100 [2024-07-15 10:40:46.508099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.100 [2024-07-15 10:40:46.508452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.100 [2024-07-15 10:40:46.508481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.100 [2024-07-15 10:40:46.508497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.100 [2024-07-15 10:40:46.508741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.100 [2024-07-15 10:40:46.508981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.100 [2024-07-15 10:40:46.509004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.100 [2024-07-15 10:40:46.509017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.100 [2024-07-15 10:40:46.512006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.100 [2024-07-15 10:40:46.521313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.100 [2024-07-15 10:40:46.521681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.100 [2024-07-15 10:40:46.521708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.100 [2024-07-15 10:40:46.521724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.100 [2024-07-15 10:40:46.522003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.100 [2024-07-15 10:40:46.522233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.100 [2024-07-15 10:40:46.522253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.100 [2024-07-15 10:40:46.522267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.100 [2024-07-15 10:40:46.525224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.100 [2024-07-15 10:40:46.534519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.100 [2024-07-15 10:40:46.534849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.100 [2024-07-15 10:40:46.534877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.100 [2024-07-15 10:40:46.534897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.100 [2024-07-15 10:40:46.535106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.100 [2024-07-15 10:40:46.535315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.100 [2024-07-15 10:40:46.535335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.100 [2024-07-15 10:40:46.535348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.100 [2024-07-15 10:40:46.538349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.100 [2024-07-15 10:40:46.547860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.100 [2024-07-15 10:40:46.548172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.100 [2024-07-15 10:40:46.548199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.100 [2024-07-15 10:40:46.548215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.100 [2024-07-15 10:40:46.548417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.100 [2024-07-15 10:40:46.548628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.100 [2024-07-15 10:40:46.548648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.100 [2024-07-15 10:40:46.548662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.100 [2024-07-15 10:40:46.551674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.100 [2024-07-15 10:40:46.561136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.100 [2024-07-15 10:40:46.561455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.100 [2024-07-15 10:40:46.561482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.100 [2024-07-15 10:40:46.561498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.100 [2024-07-15 10:40:46.561719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.100 [2024-07-15 10:40:46.561962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.100 [2024-07-15 10:40:46.561984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.100 [2024-07-15 10:40:46.561998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.100 [2024-07-15 10:40:46.564976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.100 [2024-07-15 10:40:46.574426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.101 [2024-07-15 10:40:46.574772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.101 [2024-07-15 10:40:46.574807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.101 [2024-07-15 10:40:46.574842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.101 [2024-07-15 10:40:46.575072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.101 [2024-07-15 10:40:46.575302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.101 [2024-07-15 10:40:46.575326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.101 [2024-07-15 10:40:46.575340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.101 [2024-07-15 10:40:46.578300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.101 [2024-07-15 10:40:46.587759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.101 [2024-07-15 10:40:46.588141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.101 [2024-07-15 10:40:46.588170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.101 [2024-07-15 10:40:46.588187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.101 [2024-07-15 10:40:46.588433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.101 [2024-07-15 10:40:46.588641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.101 [2024-07-15 10:40:46.588661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.101 [2024-07-15 10:40:46.588674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.101 [2024-07-15 10:40:46.591672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.101 [2024-07-15 10:40:46.600898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.101 [2024-07-15 10:40:46.601261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.101 [2024-07-15 10:40:46.601289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.101 [2024-07-15 10:40:46.601305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.101 [2024-07-15 10:40:46.601526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.101 [2024-07-15 10:40:46.601734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.101 [2024-07-15 10:40:46.601754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.101 [2024-07-15 10:40:46.601768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.101 [2024-07-15 10:40:46.604781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.101 [2024-07-15 10:40:46.614034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.101 [2024-07-15 10:40:46.614400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.101 [2024-07-15 10:40:46.614429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.101 [2024-07-15 10:40:46.614445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.101 [2024-07-15 10:40:46.614682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.101 [2024-07-15 10:40:46.614936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.101 [2024-07-15 10:40:46.614958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.101 [2024-07-15 10:40:46.614972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.101 [2024-07-15 10:40:46.617947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.101 [2024-07-15 10:40:46.627234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.101 [2024-07-15 10:40:46.627651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.101 [2024-07-15 10:40:46.627680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.101 [2024-07-15 10:40:46.627697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.101 [2024-07-15 10:40:46.627949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.101 [2024-07-15 10:40:46.628181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.101 [2024-07-15 10:40:46.628201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.101 [2024-07-15 10:40:46.628214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.101 [2024-07-15 10:40:46.631174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.101 [2024-07-15 10:40:46.640464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.101 [2024-07-15 10:40:46.640897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.101 [2024-07-15 10:40:46.640926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.101 [2024-07-15 10:40:46.640942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.101 [2024-07-15 10:40:46.641187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.101 [2024-07-15 10:40:46.641394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.101 [2024-07-15 10:40:46.641414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.101 [2024-07-15 10:40:46.641427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.101 [2024-07-15 10:40:46.644599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.360 [2024-07-15 10:40:46.654058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.360 [2024-07-15 10:40:46.654437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.360 [2024-07-15 10:40:46.654469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.360 [2024-07-15 10:40:46.654486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.360 [2024-07-15 10:40:46.654732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.360 [2024-07-15 10:40:46.654991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.360 [2024-07-15 10:40:46.655013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.360 [2024-07-15 10:40:46.655027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.360 [2024-07-15 10:40:46.657998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.360 [2024-07-15 10:40:46.667393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.360 [2024-07-15 10:40:46.667811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.360 [2024-07-15 10:40:46.667840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.360 [2024-07-15 10:40:46.667856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.360 [2024-07-15 10:40:46.668104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.360 [2024-07-15 10:40:46.668313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.360 [2024-07-15 10:40:46.668334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.360 [2024-07-15 10:40:46.668346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.360 [2024-07-15 10:40:46.671341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.360 [2024-07-15 10:40:46.680618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.360 [2024-07-15 10:40:46.680993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.360 [2024-07-15 10:40:46.681022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.360 [2024-07-15 10:40:46.681038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.360 [2024-07-15 10:40:46.681291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.360 [2024-07-15 10:40:46.681483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.360 [2024-07-15 10:40:46.681503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.360 [2024-07-15 10:40:46.681517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.360 [2024-07-15 10:40:46.684518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.360 [2024-07-15 10:40:46.693854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.360 [2024-07-15 10:40:46.694236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.360 [2024-07-15 10:40:46.694264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.360 [2024-07-15 10:40:46.694280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.360 [2024-07-15 10:40:46.694503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.360 [2024-07-15 10:40:46.694714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.360 [2024-07-15 10:40:46.694734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.360 [2024-07-15 10:40:46.694747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.360 [2024-07-15 10:40:46.697748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.360 [2024-07-15 10:40:46.707209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.360 [2024-07-15 10:40:46.707632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.360 [2024-07-15 10:40:46.707661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.360 [2024-07-15 10:40:46.707677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.360 [2024-07-15 10:40:46.707919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.360 [2024-07-15 10:40:46.708168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.360 [2024-07-15 10:40:46.708189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.360 [2024-07-15 10:40:46.708207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.360 [2024-07-15 10:40:46.711166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.360 [2024-07-15 10:40:46.720520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.360 [2024-07-15 10:40:46.720872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.360 [2024-07-15 10:40:46.720901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.360 [2024-07-15 10:40:46.720918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.360 [2024-07-15 10:40:46.721160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.360 [2024-07-15 10:40:46.721353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.360 [2024-07-15 10:40:46.721372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.360 [2024-07-15 10:40:46.721385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.360 [2024-07-15 10:40:46.724525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.360 [2024-07-15 10:40:46.734257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.360 [2024-07-15 10:40:46.734587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.360 [2024-07-15 10:40:46.734615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.360 [2024-07-15 10:40:46.734631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.360 [2024-07-15 10:40:46.734865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.360 [2024-07-15 10:40:46.735080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.360 [2024-07-15 10:40:46.735100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.360 [2024-07-15 10:40:46.735114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.360 [2024-07-15 10:40:46.738127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.360 [2024-07-15 10:40:46.747459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.360 [2024-07-15 10:40:46.747877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.360 [2024-07-15 10:40:46.747907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.360 [2024-07-15 10:40:46.747924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.360 [2024-07-15 10:40:46.748168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.360 [2024-07-15 10:40:46.748378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.360 [2024-07-15 10:40:46.748398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.360 [2024-07-15 10:40:46.748411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.360 [2024-07-15 10:40:46.751414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.360 [2024-07-15 10:40:46.760689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.360 [2024-07-15 10:40:46.761069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.360 [2024-07-15 10:40:46.761102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.360 [2024-07-15 10:40:46.761119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.360 [2024-07-15 10:40:46.761364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.360 [2024-07-15 10:40:46.761573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.360 [2024-07-15 10:40:46.761594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.360 [2024-07-15 10:40:46.761607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.360 [2024-07-15 10:40:46.764602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.360 [2024-07-15 10:40:46.774046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.360 [2024-07-15 10:40:46.774479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.360 [2024-07-15 10:40:46.774509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.360 [2024-07-15 10:40:46.774525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.360 [2024-07-15 10:40:46.774769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.360 [2024-07-15 10:40:46.775017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.361 [2024-07-15 10:40:46.775040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.361 [2024-07-15 10:40:46.775054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.361 [2024-07-15 10:40:46.778031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.361 [2024-07-15 10:40:46.787366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.361 [2024-07-15 10:40:46.787780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.361 [2024-07-15 10:40:46.787816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.361 [2024-07-15 10:40:46.787834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.361 [2024-07-15 10:40:46.788079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.361 [2024-07-15 10:40:46.788288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.361 [2024-07-15 10:40:46.788308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.361 [2024-07-15 10:40:46.788321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.361 [2024-07-15 10:40:46.791359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.361 [2024-07-15 10:40:46.800693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.361 [2024-07-15 10:40:46.801067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.361 [2024-07-15 10:40:46.801096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.361 [2024-07-15 10:40:46.801127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.361 [2024-07-15 10:40:46.801363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.361 [2024-07-15 10:40:46.801575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.361 [2024-07-15 10:40:46.801595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.361 [2024-07-15 10:40:46.801608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.361 [2024-07-15 10:40:46.804615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.361 [2024-07-15 10:40:46.813958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.361 [2024-07-15 10:40:46.814268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.361 [2024-07-15 10:40:46.814311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.361 [2024-07-15 10:40:46.814327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.361 [2024-07-15 10:40:46.814543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.361 [2024-07-15 10:40:46.814752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.361 [2024-07-15 10:40:46.814773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.361 [2024-07-15 10:40:46.814811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.361 [2024-07-15 10:40:46.817807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.361 [2024-07-15 10:40:46.827231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.361 [2024-07-15 10:40:46.827644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.361 [2024-07-15 10:40:46.827672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.361 [2024-07-15 10:40:46.827687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.361 [2024-07-15 10:40:46.827938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.361 [2024-07-15 10:40:46.828172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.361 [2024-07-15 10:40:46.828193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.361 [2024-07-15 10:40:46.828207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.361 [2024-07-15 10:40:46.831203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.361 [2024-07-15 10:40:46.840428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.361 [2024-07-15 10:40:46.840760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.361 [2024-07-15 10:40:46.840788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.361 [2024-07-15 10:40:46.840825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.361 [2024-07-15 10:40:46.841061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.361 [2024-07-15 10:40:46.841288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.361 [2024-07-15 10:40:46.841309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.361 [2024-07-15 10:40:46.841322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.361 [2024-07-15 10:40:46.844285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.361 [2024-07-15 10:40:46.853929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.361 [2024-07-15 10:40:46.854323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.361 [2024-07-15 10:40:46.854352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.361 [2024-07-15 10:40:46.854368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.361 [2024-07-15 10:40:46.854610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.361 [2024-07-15 10:40:46.854863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.361 [2024-07-15 10:40:46.854886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.361 [2024-07-15 10:40:46.854901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.361 [2024-07-15 10:40:46.857991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.361 [2024-07-15 10:40:46.867182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.361 [2024-07-15 10:40:46.867592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.361 [2024-07-15 10:40:46.867621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.361 [2024-07-15 10:40:46.867637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.361 [2024-07-15 10:40:46.867891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.361 [2024-07-15 10:40:46.868112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.361 [2024-07-15 10:40:46.868132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.361 [2024-07-15 10:40:46.868161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.361 [2024-07-15 10:40:46.871240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.361 [2024-07-15 10:40:46.880412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.361 [2024-07-15 10:40:46.880791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.361 [2024-07-15 10:40:46.880825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.361 [2024-07-15 10:40:46.880857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.361 [2024-07-15 10:40:46.881103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.361 [2024-07-15 10:40:46.881315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.361 [2024-07-15 10:40:46.881335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.361 [2024-07-15 10:40:46.881348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.361 [2024-07-15 10:40:46.884332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.361 [2024-07-15 10:40:46.893682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.361 [2024-07-15 10:40:46.894071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.361 [2024-07-15 10:40:46.894100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.361 [2024-07-15 10:40:46.894143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.361 [2024-07-15 10:40:46.894376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.361 [2024-07-15 10:40:46.894585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.361 [2024-07-15 10:40:46.894604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.361 [2024-07-15 10:40:46.894617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.361 [2024-07-15 10:40:46.897660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.361 [2024-07-15 10:40:46.907481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.361 [2024-07-15 10:40:46.907876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.361 [2024-07-15 10:40:46.907907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.361 [2024-07-15 10:40:46.907923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.361 [2024-07-15 10:40:46.908184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.361 [2024-07-15 10:40:46.908438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.361 [2024-07-15 10:40:46.908462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.361 [2024-07-15 10:40:46.908492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.620 [2024-07-15 10:40:46.911675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.620 [2024-07-15 10:40:46.920787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.620 [2024-07-15 10:40:46.921207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.620 [2024-07-15 10:40:46.921238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.620 [2024-07-15 10:40:46.921255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.620 [2024-07-15 10:40:46.921498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.620 [2024-07-15 10:40:46.921729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.620 [2024-07-15 10:40:46.921750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.620 [2024-07-15 10:40:46.921763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.620 [2024-07-15 10:40:46.925020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.620 [2024-07-15 10:40:46.934266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.620 [2024-07-15 10:40:46.934620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.620 [2024-07-15 10:40:46.934650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.620 [2024-07-15 10:40:46.934667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.620 [2024-07-15 10:40:46.934894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.620 [2024-07-15 10:40:46.935127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.620 [2024-07-15 10:40:46.935166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.620 [2024-07-15 10:40:46.935180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.620 [2024-07-15 10:40:46.938354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.620 [2024-07-15 10:40:46.947708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.620 [2024-07-15 10:40:46.948034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.620 [2024-07-15 10:40:46.948064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.620 [2024-07-15 10:40:46.948081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.620 [2024-07-15 10:40:46.948338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.620 [2024-07-15 10:40:46.948554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.620 [2024-07-15 10:40:46.948589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.620 [2024-07-15 10:40:46.948603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.620 [2024-07-15 10:40:46.951760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.620 [2024-07-15 10:40:46.961034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.620 [2024-07-15 10:40:46.961403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.620 [2024-07-15 10:40:46.961431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.620 [2024-07-15 10:40:46.961448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.621 [2024-07-15 10:40:46.961671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.621 [2024-07-15 10:40:46.961915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.621 [2024-07-15 10:40:46.961938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.621 [2024-07-15 10:40:46.961951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.621 [2024-07-15 10:40:46.964986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.621 [2024-07-15 10:40:46.974360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.621 [2024-07-15 10:40:46.974773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.621 [2024-07-15 10:40:46.974809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.621 [2024-07-15 10:40:46.974827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.621 [2024-07-15 10:40:46.975043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.621 [2024-07-15 10:40:46.975278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.621 [2024-07-15 10:40:46.975298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.621 [2024-07-15 10:40:46.975311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.621 [2024-07-15 10:40:46.978612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.621 [2024-07-15 10:40:46.987758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.621 [2024-07-15 10:40:46.988084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.621 [2024-07-15 10:40:46.988129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.621 [2024-07-15 10:40:46.988145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.621 [2024-07-15 10:40:46.988380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.621 [2024-07-15 10:40:46.988589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.621 [2024-07-15 10:40:46.988609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.621 [2024-07-15 10:40:46.988621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.621 [2024-07-15 10:40:46.991722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.621 [2024-07-15 10:40:47.001114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.621 [2024-07-15 10:40:47.001541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.621 [2024-07-15 10:40:47.001569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.621 [2024-07-15 10:40:47.001584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.621 [2024-07-15 10:40:47.001830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.621 [2024-07-15 10:40:47.002052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.621 [2024-07-15 10:40:47.002073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.621 [2024-07-15 10:40:47.002086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.621 [2024-07-15 10:40:47.005135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.621 [2024-07-15 10:40:47.014466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.621 [2024-07-15 10:40:47.014887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.621 [2024-07-15 10:40:47.014916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.621 [2024-07-15 10:40:47.014932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.621 [2024-07-15 10:40:47.015162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.621 [2024-07-15 10:40:47.015373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.621 [2024-07-15 10:40:47.015392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.621 [2024-07-15 10:40:47.015405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.621 [2024-07-15 10:40:47.018459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.621 [2024-07-15 10:40:47.027744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.621 [2024-07-15 10:40:47.028143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.621 [2024-07-15 10:40:47.028172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.621 [2024-07-15 10:40:47.028193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.621 [2024-07-15 10:40:47.028435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.621 [2024-07-15 10:40:47.028642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.621 [2024-07-15 10:40:47.028662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.621 [2024-07-15 10:40:47.028676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.621 [2024-07-15 10:40:47.031710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.621 [2024-07-15 10:40:47.041034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.621 [2024-07-15 10:40:47.041491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.621 [2024-07-15 10:40:47.041520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.621 [2024-07-15 10:40:47.041537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.621 [2024-07-15 10:40:47.041778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.621 [2024-07-15 10:40:47.042012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.621 [2024-07-15 10:40:47.042033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.621 [2024-07-15 10:40:47.042047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.621 [2024-07-15 10:40:47.045044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.621 [2024-07-15 10:40:47.054328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.621 [2024-07-15 10:40:47.054632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.621 [2024-07-15 10:40:47.054659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.621 [2024-07-15 10:40:47.054675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.621 [2024-07-15 10:40:47.054935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.621 [2024-07-15 10:40:47.055163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.621 [2024-07-15 10:40:47.055183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.621 [2024-07-15 10:40:47.055196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.621 [2024-07-15 10:40:47.058170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.621 [2024-07-15 10:40:47.067656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.621 [2024-07-15 10:40:47.068035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.621 [2024-07-15 10:40:47.068065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.621 [2024-07-15 10:40:47.068081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.621 [2024-07-15 10:40:47.068324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.621 [2024-07-15 10:40:47.068533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.621 [2024-07-15 10:40:47.068557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.621 [2024-07-15 10:40:47.068571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.621 [2024-07-15 10:40:47.071574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.621 [2024-07-15 10:40:47.080855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.621 [2024-07-15 10:40:47.081247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.621 [2024-07-15 10:40:47.081274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.621 [2024-07-15 10:40:47.081290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.621 [2024-07-15 10:40:47.081511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.621 [2024-07-15 10:40:47.081720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.621 [2024-07-15 10:40:47.081740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.621 [2024-07-15 10:40:47.081753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.621 [2024-07-15 10:40:47.084758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.621 [2024-07-15 10:40:47.094177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.621 [2024-07-15 10:40:47.094532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.621 [2024-07-15 10:40:47.094562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.621 [2024-07-15 10:40:47.094578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.621 [2024-07-15 10:40:47.094833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.621 [2024-07-15 10:40:47.095054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.621 [2024-07-15 10:40:47.095076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.621 [2024-07-15 10:40:47.095104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.621 [2024-07-15 10:40:47.098066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.621 [2024-07-15 10:40:47.107346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.621 [2024-07-15 10:40:47.107758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.621 [2024-07-15 10:40:47.107809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.622 [2024-07-15 10:40:47.107829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.622 [2024-07-15 10:40:47.108073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.622 [2024-07-15 10:40:47.108301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.622 [2024-07-15 10:40:47.108322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.622 [2024-07-15 10:40:47.108334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.622 [2024-07-15 10:40:47.111293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.622 [2024-07-15 10:40:47.120550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.622 [2024-07-15 10:40:47.120962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.622 [2024-07-15 10:40:47.120992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.622 [2024-07-15 10:40:47.121008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.622 [2024-07-15 10:40:47.121250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.622 [2024-07-15 10:40:47.121458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.622 [2024-07-15 10:40:47.121479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.622 [2024-07-15 10:40:47.121492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.622 [2024-07-15 10:40:47.124485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.622 [2024-07-15 10:40:47.133808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.622 [2024-07-15 10:40:47.134172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.622 [2024-07-15 10:40:47.134202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.622 [2024-07-15 10:40:47.134218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.622 [2024-07-15 10:40:47.134461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.622 [2024-07-15 10:40:47.134654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.622 [2024-07-15 10:40:47.134674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.622 [2024-07-15 10:40:47.134687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.622 [2024-07-15 10:40:47.137651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.622 [2024-07-15 10:40:47.147060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.622 [2024-07-15 10:40:47.147478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.622 [2024-07-15 10:40:47.147505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.622 [2024-07-15 10:40:47.147521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.622 [2024-07-15 10:40:47.147755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.622 [2024-07-15 10:40:47.147976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.622 [2024-07-15 10:40:47.147998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.622 [2024-07-15 10:40:47.148012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.622 [2024-07-15 10:40:47.151007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.622 [2024-07-15 10:40:47.160271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.622 [2024-07-15 10:40:47.160589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.622 [2024-07-15 10:40:47.160617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.622 [2024-07-15 10:40:47.160632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.622 [2024-07-15 10:40:47.160883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.622 [2024-07-15 10:40:47.161111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.622 [2024-07-15 10:40:47.161132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.622 [2024-07-15 10:40:47.161158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.622 [2024-07-15 10:40:47.164045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.881 [2024-07-15 10:40:47.173534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.881 [2024-07-15 10:40:47.173962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.881 [2024-07-15 10:40:47.174001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.881 [2024-07-15 10:40:47.174030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.881 [2024-07-15 10:40:47.174282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.881 [2024-07-15 10:40:47.174546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.881 [2024-07-15 10:40:47.174569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.881 [2024-07-15 10:40:47.174583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.881 [2024-07-15 10:40:47.177509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.881 [2024-07-15 10:40:47.186598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.881 [2024-07-15 10:40:47.186980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.881 [2024-07-15 10:40:47.187054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.881 [2024-07-15 10:40:47.187071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.881 [2024-07-15 10:40:47.187304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.881 [2024-07-15 10:40:47.187493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.881 [2024-07-15 10:40:47.187513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.881 [2024-07-15 10:40:47.187526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.881 [2024-07-15 10:40:47.190429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.881 [2024-07-15 10:40:47.199680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.881 [2024-07-15 10:40:47.200094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.881 [2024-07-15 10:40:47.200123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.881 [2024-07-15 10:40:47.200139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.881 [2024-07-15 10:40:47.200375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.881 [2024-07-15 10:40:47.200577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.881 [2024-07-15 10:40:47.200598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.881 [2024-07-15 10:40:47.200615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.881 [2024-07-15 10:40:47.203533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.881 [2024-07-15 10:40:47.212747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.881 [2024-07-15 10:40:47.213098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.881 [2024-07-15 10:40:47.213127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.881 [2024-07-15 10:40:47.213143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.881 [2024-07-15 10:40:47.213378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.881 [2024-07-15 10:40:47.213582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.881 [2024-07-15 10:40:47.213602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.881 [2024-07-15 10:40:47.213615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.881 [2024-07-15 10:40:47.216522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.881 [2024-07-15 10:40:47.225775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.881 [2024-07-15 10:40:47.226164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.881 [2024-07-15 10:40:47.226193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.881 [2024-07-15 10:40:47.226209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.881 [2024-07-15 10:40:47.226426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.881 [2024-07-15 10:40:47.226630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.881 [2024-07-15 10:40:47.226650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.881 [2024-07-15 10:40:47.226662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.881 [2024-07-15 10:40:47.229771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.881 [2024-07-15 10:40:47.239286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.881 [2024-07-15 10:40:47.239685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.881 [2024-07-15 10:40:47.239714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.881 [2024-07-15 10:40:47.239731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.881 [2024-07-15 10:40:47.239995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.881 [2024-07-15 10:40:47.240221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.881 [2024-07-15 10:40:47.240241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.881 [2024-07-15 10:40:47.240253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.881 [2024-07-15 10:40:47.243197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.881 [2024-07-15 10:40:47.252427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.881 [2024-07-15 10:40:47.252833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.881 [2024-07-15 10:40:47.252875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.881 [2024-07-15 10:40:47.252892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.881 [2024-07-15 10:40:47.253128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.881 [2024-07-15 10:40:47.253332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.881 [2024-07-15 10:40:47.253352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.881 [2024-07-15 10:40:47.253365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.881 [2024-07-15 10:40:47.256253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.881 [2024-07-15 10:40:47.265594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.881 [2024-07-15 10:40:47.265963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.881 [2024-07-15 10:40:47.265990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.881 [2024-07-15 10:40:47.266006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.881 [2024-07-15 10:40:47.266222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.881 [2024-07-15 10:40:47.266426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.881 [2024-07-15 10:40:47.266446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.881 [2024-07-15 10:40:47.266459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.881 [2024-07-15 10:40:47.269361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.881 [2024-07-15 10:40:47.278740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.881 [2024-07-15 10:40:47.279144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.881 [2024-07-15 10:40:47.279198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.881 [2024-07-15 10:40:47.279214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.882 [2024-07-15 10:40:47.279465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.882 [2024-07-15 10:40:47.279652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.882 [2024-07-15 10:40:47.279672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.882 [2024-07-15 10:40:47.279685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.882 [2024-07-15 10:40:47.282620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.882 [2024-07-15 10:40:47.291958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.882 [2024-07-15 10:40:47.292372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.882 [2024-07-15 10:40:47.292429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.882 [2024-07-15 10:40:47.292445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.882 [2024-07-15 10:40:47.292688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.882 [2024-07-15 10:40:47.292912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.882 [2024-07-15 10:40:47.292934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.882 [2024-07-15 10:40:47.292948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.882 [2024-07-15 10:40:47.295872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.882 [2024-07-15 10:40:47.304982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.882 [2024-07-15 10:40:47.305336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.882 [2024-07-15 10:40:47.305399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.882 [2024-07-15 10:40:47.305430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.882 [2024-07-15 10:40:47.305662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.882 [2024-07-15 10:40:47.305878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.882 [2024-07-15 10:40:47.305900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.882 [2024-07-15 10:40:47.305913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.882 [2024-07-15 10:40:47.308765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.882 [2024-07-15 10:40:47.318132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.882 [2024-07-15 10:40:47.318530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.882 [2024-07-15 10:40:47.318583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.882 [2024-07-15 10:40:47.318598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.882 [2024-07-15 10:40:47.318850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.882 [2024-07-15 10:40:47.319065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.882 [2024-07-15 10:40:47.319085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.882 [2024-07-15 10:40:47.319112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.882 [2024-07-15 10:40:47.321987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.882 [2024-07-15 10:40:47.331257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.882 [2024-07-15 10:40:47.331667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.882 [2024-07-15 10:40:47.331721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.882 [2024-07-15 10:40:47.331736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.882 [2024-07-15 10:40:47.331974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.882 [2024-07-15 10:40:47.332183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.882 [2024-07-15 10:40:47.332203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.882 [2024-07-15 10:40:47.332216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.882 [2024-07-15 10:40:47.335083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.882 [2024-07-15 10:40:47.344350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.882 [2024-07-15 10:40:47.344695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.882 [2024-07-15 10:40:47.344724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.882 [2024-07-15 10:40:47.344740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.882 [2024-07-15 10:40:47.345005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.882 [2024-07-15 10:40:47.345230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.882 [2024-07-15 10:40:47.345250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.882 [2024-07-15 10:40:47.345262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.882 [2024-07-15 10:40:47.348136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.882 [2024-07-15 10:40:47.357391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.882 [2024-07-15 10:40:47.357759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.882 [2024-07-15 10:40:47.357788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.882 [2024-07-15 10:40:47.357827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.882 [2024-07-15 10:40:47.358052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.882 [2024-07-15 10:40:47.358274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.882 [2024-07-15 10:40:47.358294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.882 [2024-07-15 10:40:47.358306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.882 [2024-07-15 10:40:47.361182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.882 [2024-07-15 10:40:47.370477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.882 [2024-07-15 10:40:47.370821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.882 [2024-07-15 10:40:47.370851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.882 [2024-07-15 10:40:47.370868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.882 [2024-07-15 10:40:47.371082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.882 [2024-07-15 10:40:47.371284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.882 [2024-07-15 10:40:47.371304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.882 [2024-07-15 10:40:47.371318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.882 [2024-07-15 10:40:47.374222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.882 [2024-07-15 10:40:47.383638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.882 [2024-07-15 10:40:47.384050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.882 [2024-07-15 10:40:47.384078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.882 [2024-07-15 10:40:47.384099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.882 [2024-07-15 10:40:47.384334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.882 [2024-07-15 10:40:47.384536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.882 [2024-07-15 10:40:47.384556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.882 [2024-07-15 10:40:47.384569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.882 [2024-07-15 10:40:47.387473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.882 [2024-07-15 10:40:47.396799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.882 [2024-07-15 10:40:47.397113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.882 [2024-07-15 10:40:47.397140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.882 [2024-07-15 10:40:47.397154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.882 [2024-07-15 10:40:47.397371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.882 [2024-07-15 10:40:47.397573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.882 [2024-07-15 10:40:47.397593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.882 [2024-07-15 10:40:47.397605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.882 [2024-07-15 10:40:47.400484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.882 [2024-07-15 10:40:47.409945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.882 [2024-07-15 10:40:47.410290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.882 [2024-07-15 10:40:47.410318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.883 [2024-07-15 10:40:47.410334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.883 [2024-07-15 10:40:47.410571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.883 [2024-07-15 10:40:47.410775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.883 [2024-07-15 10:40:47.410819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.883 [2024-07-15 10:40:47.410832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.883 [2024-07-15 10:40:47.413645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.883 [2024-07-15 10:40:47.423137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.883 [2024-07-15 10:40:47.423478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.883 [2024-07-15 10:40:47.423507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:58.883 [2024-07-15 10:40:47.423523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:58.883 [2024-07-15 10:40:47.423758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:58.883 [2024-07-15 10:40:47.423981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.883 [2024-07-15 10:40:47.424007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.883 [2024-07-15 10:40:47.424020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.883 [2024-07-15 10:40:47.427115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.142 [2024-07-15 10:40:47.436557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.142 [2024-07-15 10:40:47.436927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.142 [2024-07-15 10:40:47.436957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.142 [2024-07-15 10:40:47.436973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.142 [2024-07-15 10:40:47.437209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.142 [2024-07-15 10:40:47.437412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.142 [2024-07-15 10:40:47.437432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.142 [2024-07-15 10:40:47.437445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.142 [2024-07-15 10:40:47.440289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.142 [2024-07-15 10:40:47.449678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.142 [2024-07-15 10:40:47.450055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.142 [2024-07-15 10:40:47.450085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.142 [2024-07-15 10:40:47.450101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.142 [2024-07-15 10:40:47.450351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.142 [2024-07-15 10:40:47.450553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.142 [2024-07-15 10:40:47.450573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.142 [2024-07-15 10:40:47.450587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.142 [2024-07-15 10:40:47.453521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.142 [2024-07-15 10:40:47.462695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.142 [2024-07-15 10:40:47.463109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.142 [2024-07-15 10:40:47.463137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.142 [2024-07-15 10:40:47.463153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.142 [2024-07-15 10:40:47.463387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.142 [2024-07-15 10:40:47.463575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.142 [2024-07-15 10:40:47.463595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.142 [2024-07-15 10:40:47.463608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.142 [2024-07-15 10:40:47.466493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.142 [2024-07-15 10:40:47.475712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.142 [2024-07-15 10:40:47.476072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.142 [2024-07-15 10:40:47.476113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.142 [2024-07-15 10:40:47.476129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.142 [2024-07-15 10:40:47.476348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.142 [2024-07-15 10:40:47.476551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.142 [2024-07-15 10:40:47.476569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.142 [2024-07-15 10:40:47.476582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.143 [2024-07-15 10:40:47.479593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.143 [2024-07-15 10:40:47.489208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.143 [2024-07-15 10:40:47.489609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.143 [2024-07-15 10:40:47.489638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.143 [2024-07-15 10:40:47.489654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.143 [2024-07-15 10:40:47.489899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.143 [2024-07-15 10:40:47.490141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.143 [2024-07-15 10:40:47.490161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.143 [2024-07-15 10:40:47.490174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.143 [2024-07-15 10:40:47.493109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.143 [2024-07-15 10:40:47.502338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.143 [2024-07-15 10:40:47.502741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.143 [2024-07-15 10:40:47.502768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.143 [2024-07-15 10:40:47.502784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.143 [2024-07-15 10:40:47.503026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.143 [2024-07-15 10:40:47.503246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.143 [2024-07-15 10:40:47.503267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.143 [2024-07-15 10:40:47.503279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.143 [2024-07-15 10:40:47.506224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.143 [2024-07-15 10:40:47.515468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.143 [2024-07-15 10:40:47.515876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.143 [2024-07-15 10:40:47.515905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.143 [2024-07-15 10:40:47.515920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.143 [2024-07-15 10:40:47.516163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.143 [2024-07-15 10:40:47.516366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.143 [2024-07-15 10:40:47.516386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.143 [2024-07-15 10:40:47.516398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.143 [2024-07-15 10:40:47.519286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.143 [2024-07-15 10:40:47.528463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.143 [2024-07-15 10:40:47.528912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.143 [2024-07-15 10:40:47.528941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.143 [2024-07-15 10:40:47.528957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.143 [2024-07-15 10:40:47.529204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.143 [2024-07-15 10:40:47.529391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.143 [2024-07-15 10:40:47.529410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.143 [2024-07-15 10:40:47.529423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.143 [2024-07-15 10:40:47.532326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.143 [2024-07-15 10:40:47.541623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.143 [2024-07-15 10:40:47.541997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.143 [2024-07-15 10:40:47.542041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.143 [2024-07-15 10:40:47.542057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.143 [2024-07-15 10:40:47.542292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.143 [2024-07-15 10:40:47.542495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.143 [2024-07-15 10:40:47.542515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.143 [2024-07-15 10:40:47.542528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.143 [2024-07-15 10:40:47.545447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.143 [2024-07-15 10:40:47.554720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.143 [2024-07-15 10:40:47.555072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.143 [2024-07-15 10:40:47.555100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.143 [2024-07-15 10:40:47.555115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.143 [2024-07-15 10:40:47.555348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.143 [2024-07-15 10:40:47.555536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.143 [2024-07-15 10:40:47.555555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.143 [2024-07-15 10:40:47.555574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.143 [2024-07-15 10:40:47.558499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.143 [2024-07-15 10:40:47.567712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.143 [2024-07-15 10:40:47.568016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.143 [2024-07-15 10:40:47.568043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.143 [2024-07-15 10:40:47.568058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.143 [2024-07-15 10:40:47.568269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.143 [2024-07-15 10:40:47.568473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.143 [2024-07-15 10:40:47.568492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.143 [2024-07-15 10:40:47.568505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.143 [2024-07-15 10:40:47.571407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.143 [2024-07-15 10:40:47.580818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.143 [2024-07-15 10:40:47.581224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.143 [2024-07-15 10:40:47.581252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.143 [2024-07-15 10:40:47.581267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.143 [2024-07-15 10:40:47.581503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.143 [2024-07-15 10:40:47.581707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.143 [2024-07-15 10:40:47.581727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.143 [2024-07-15 10:40:47.581740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.143 [2024-07-15 10:40:47.584640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.143 [2024-07-15 10:40:47.593936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.143 [2024-07-15 10:40:47.594276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.143 [2024-07-15 10:40:47.594304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.143 [2024-07-15 10:40:47.594320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.143 [2024-07-15 10:40:47.594555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.143 [2024-07-15 10:40:47.594757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.143 [2024-07-15 10:40:47.594778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.143 [2024-07-15 10:40:47.594790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.143 [2024-07-15 10:40:47.597667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.143 [2024-07-15 10:40:47.606961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.143 [2024-07-15 10:40:47.607258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.143 [2024-07-15 10:40:47.607299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.143 [2024-07-15 10:40:47.607314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.143 [2024-07-15 10:40:47.607532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.143 [2024-07-15 10:40:47.607735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.143 [2024-07-15 10:40:47.607755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.143 [2024-07-15 10:40:47.607768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.143 [2024-07-15 10:40:47.610685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.143 [2024-07-15 10:40:47.620046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.143 [2024-07-15 10:40:47.620449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.143 [2024-07-15 10:40:47.620477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.143 [2024-07-15 10:40:47.620493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.143 [2024-07-15 10:40:47.620727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.143 [2024-07-15 10:40:47.620958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.144 [2024-07-15 10:40:47.620980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.144 [2024-07-15 10:40:47.620993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.144 [2024-07-15 10:40:47.623867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.144 [2024-07-15 10:40:47.633136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.144 [2024-07-15 10:40:47.633538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.144 [2024-07-15 10:40:47.633566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.144 [2024-07-15 10:40:47.633582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.144 [2024-07-15 10:40:47.633828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.144 [2024-07-15 10:40:47.634042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.144 [2024-07-15 10:40:47.634063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.144 [2024-07-15 10:40:47.634076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.144 [2024-07-15 10:40:47.636967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.144 [2024-07-15 10:40:47.646284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.144 [2024-07-15 10:40:47.646736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.144 [2024-07-15 10:40:47.646788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.144 [2024-07-15 10:40:47.646812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.144 [2024-07-15 10:40:47.647068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.144 [2024-07-15 10:40:47.647287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.144 [2024-07-15 10:40:47.647307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.144 [2024-07-15 10:40:47.647320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.144 [2024-07-15 10:40:47.650080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.144 [2024-07-15 10:40:47.659383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.144 [2024-07-15 10:40:47.659825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.144 [2024-07-15 10:40:47.659873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.144 [2024-07-15 10:40:47.659888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.144 [2024-07-15 10:40:47.660133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.144 [2024-07-15 10:40:47.660320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.144 [2024-07-15 10:40:47.660340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.144 [2024-07-15 10:40:47.660353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.144 [2024-07-15 10:40:47.663154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.144 [2024-07-15 10:40:47.672456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.144 [2024-07-15 10:40:47.672863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.144 [2024-07-15 10:40:47.672890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.144 [2024-07-15 10:40:47.672906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.144 [2024-07-15 10:40:47.673141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.144 [2024-07-15 10:40:47.673343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.144 [2024-07-15 10:40:47.673363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.144 [2024-07-15 10:40:47.673376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.144 [2024-07-15 10:40:47.676338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.144 [2024-07-15 10:40:47.685582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.144 [2024-07-15 10:40:47.685993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.144 [2024-07-15 10:40:47.686021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.144 [2024-07-15 10:40:47.686036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.144 [2024-07-15 10:40:47.686265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.144 [2024-07-15 10:40:47.686468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.144 [2024-07-15 10:40:47.686488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.144 [2024-07-15 10:40:47.686504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.144 [2024-07-15 10:40:47.689836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.403 [2024-07-15 10:40:47.699043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.403 [2024-07-15 10:40:47.699411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.403 [2024-07-15 10:40:47.699440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.403 [2024-07-15 10:40:47.699456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.403 [2024-07-15 10:40:47.699692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.403 [2024-07-15 10:40:47.699941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.403 [2024-07-15 10:40:47.699961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.403 [2024-07-15 10:40:47.699974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.403 [2024-07-15 10:40:47.702856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.403 [2024-07-15 10:40:47.712253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.403 [2024-07-15 10:40:47.712639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.403 [2024-07-15 10:40:47.712669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.403 [2024-07-15 10:40:47.712685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.403 [2024-07-15 10:40:47.712944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.403 [2024-07-15 10:40:47.713173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.403 [2024-07-15 10:40:47.713207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.403 [2024-07-15 10:40:47.713220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.403 [2024-07-15 10:40:47.716082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.403 [2024-07-15 10:40:47.725571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.403 [2024-07-15 10:40:47.725993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.404 [2024-07-15 10:40:47.726021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.404 [2024-07-15 10:40:47.726040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.404 [2024-07-15 10:40:47.726291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.404 [2024-07-15 10:40:47.726494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.404 [2024-07-15 10:40:47.726512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.404 [2024-07-15 10:40:47.726525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.404 [2024-07-15 10:40:47.729738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.404 [2024-07-15 10:40:47.738994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.404 [2024-07-15 10:40:47.739456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.404 [2024-07-15 10:40:47.739491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.404 [2024-07-15 10:40:47.739509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.404 [2024-07-15 10:40:47.739752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.404 [2024-07-15 10:40:47.739992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.404 [2024-07-15 10:40:47.740014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.404 [2024-07-15 10:40:47.740027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.404 [2024-07-15 10:40:47.742942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.404 [2024-07-15 10:40:47.752305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.404 [2024-07-15 10:40:47.752647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.404 [2024-07-15 10:40:47.752675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.404 [2024-07-15 10:40:47.752690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.404 [2024-07-15 10:40:47.752955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.404 [2024-07-15 10:40:47.753149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.404 [2024-07-15 10:40:47.753183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.404 [2024-07-15 10:40:47.753196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.404 [2024-07-15 10:40:47.756063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.404 [2024-07-15 10:40:47.765471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.404 [2024-07-15 10:40:47.765780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.404 [2024-07-15 10:40:47.765816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.404 [2024-07-15 10:40:47.765848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.404 [2024-07-15 10:40:47.766087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.404 [2024-07-15 10:40:47.766292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.404 [2024-07-15 10:40:47.766312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.404 [2024-07-15 10:40:47.766325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.404 [2024-07-15 10:40:47.769169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.404 [2024-07-15 10:40:47.778619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.404 [2024-07-15 10:40:47.778989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.404 [2024-07-15 10:40:47.779016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.404 [2024-07-15 10:40:47.779031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.404 [2024-07-15 10:40:47.779257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.404 [2024-07-15 10:40:47.779464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.404 [2024-07-15 10:40:47.779484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.404 [2024-07-15 10:40:47.779497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.404 [2024-07-15 10:40:47.782451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.404 [2024-07-15 10:40:47.791850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.404 [2024-07-15 10:40:47.792245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.404 [2024-07-15 10:40:47.792274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.404 [2024-07-15 10:40:47.792291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.404 [2024-07-15 10:40:47.792544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.404 [2024-07-15 10:40:47.792732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.404 [2024-07-15 10:40:47.792751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.404 [2024-07-15 10:40:47.792764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.404 [2024-07-15 10:40:47.795686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.404 [2024-07-15 10:40:47.804944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.404 [2024-07-15 10:40:47.805233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.404 [2024-07-15 10:40:47.805275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.404 [2024-07-15 10:40:47.805290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.404 [2024-07-15 10:40:47.805487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.404 [2024-07-15 10:40:47.805707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.404 [2024-07-15 10:40:47.805727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.404 [2024-07-15 10:40:47.805740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.404 [2024-07-15 10:40:47.808620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.404 [2024-07-15 10:40:47.818192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.404 [2024-07-15 10:40:47.818534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.404 [2024-07-15 10:40:47.818561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.404 [2024-07-15 10:40:47.818577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.404 [2024-07-15 10:40:47.818823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.404 [2024-07-15 10:40:47.819017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.404 [2024-07-15 10:40:47.819037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.404 [2024-07-15 10:40:47.819050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.404 [2024-07-15 10:40:47.821926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.404 [2024-07-15 10:40:47.831242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.404 [2024-07-15 10:40:47.831549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.404 [2024-07-15 10:40:47.831577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.404 [2024-07-15 10:40:47.831592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.404 [2024-07-15 10:40:47.831823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.404 [2024-07-15 10:40:47.832022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.404 [2024-07-15 10:40:47.832043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.404 [2024-07-15 10:40:47.832058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.404 [2024-07-15 10:40:47.834951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.404 [2024-07-15 10:40:47.844502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.404 [2024-07-15 10:40:47.844858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.404 [2024-07-15 10:40:47.844886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.404 [2024-07-15 10:40:47.844902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.404 [2024-07-15 10:40:47.845141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.404 [2024-07-15 10:40:47.845343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.404 [2024-07-15 10:40:47.845363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.404 [2024-07-15 10:40:47.845376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.404 [2024-07-15 10:40:47.848300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.404 [2024-07-15 10:40:47.857480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.404 [2024-07-15 10:40:47.857951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.404 [2024-07-15 10:40:47.857979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.404 [2024-07-15 10:40:47.857996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.404 [2024-07-15 10:40:47.858238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.404 [2024-07-15 10:40:47.858440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.404 [2024-07-15 10:40:47.858460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.404 [2024-07-15 10:40:47.858472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.404 [2024-07-15 10:40:47.861357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.404 [2024-07-15 10:40:47.870600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.404 [2024-07-15 10:40:47.871011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.405 [2024-07-15 10:40:47.871039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.405 [2024-07-15 10:40:47.871059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.405 [2024-07-15 10:40:47.871295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.405 [2024-07-15 10:40:47.871498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.405 [2024-07-15 10:40:47.871518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.405 [2024-07-15 10:40:47.871531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.405 [2024-07-15 10:40:47.874437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.405 [2024-07-15 10:40:47.883769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.405 [2024-07-15 10:40:47.884119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.405 [2024-07-15 10:40:47.884147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.405 [2024-07-15 10:40:47.884163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.405 [2024-07-15 10:40:47.884398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.405 [2024-07-15 10:40:47.884600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.405 [2024-07-15 10:40:47.884620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.405 [2024-07-15 10:40:47.884633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.405 [2024-07-15 10:40:47.887522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.405 [2024-07-15 10:40:47.896735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.405 [2024-07-15 10:40:47.897086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.405 [2024-07-15 10:40:47.897113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.405 [2024-07-15 10:40:47.897129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.405 [2024-07-15 10:40:47.897358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.405 [2024-07-15 10:40:47.897560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.405 [2024-07-15 10:40:47.897580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.405 [2024-07-15 10:40:47.897593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.405 [2024-07-15 10:40:47.900499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.405 [2024-07-15 10:40:47.909786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.405 [2024-07-15 10:40:47.910132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.405 [2024-07-15 10:40:47.910159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.405 [2024-07-15 10:40:47.910175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.405 [2024-07-15 10:40:47.910391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.405 [2024-07-15 10:40:47.910593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.405 [2024-07-15 10:40:47.910617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.405 [2024-07-15 10:40:47.910630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.405 [2024-07-15 10:40:47.913618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.405 [2024-07-15 10:40:47.922872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.405 [2024-07-15 10:40:47.923225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.405 [2024-07-15 10:40:47.923267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.405 [2024-07-15 10:40:47.923282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.405 [2024-07-15 10:40:47.923499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.405 [2024-07-15 10:40:47.923703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.405 [2024-07-15 10:40:47.923722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.405 [2024-07-15 10:40:47.923734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.405 [2024-07-15 10:40:47.926676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.405 [2024-07-15 10:40:47.935929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.405 [2024-07-15 10:40:47.936334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.405 [2024-07-15 10:40:47.936362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.405 [2024-07-15 10:40:47.936377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.405 [2024-07-15 10:40:47.936611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.405 [2024-07-15 10:40:47.936809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.405 [2024-07-15 10:40:47.936844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.405 [2024-07-15 10:40:47.936857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.405 [2024-07-15 10:40:47.939636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.405 [2024-07-15 10:40:47.949378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.405 [2024-07-15 10:40:47.949676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.405 [2024-07-15 10:40:47.949718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.405 [2024-07-15 10:40:47.949734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.405 [2024-07-15 10:40:47.950016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.405 [2024-07-15 10:40:47.950263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.405 [2024-07-15 10:40:47.950295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.405 [2024-07-15 10:40:47.950314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.663 [2024-07-15 10:40:47.953608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.663 [2024-07-15 10:40:47.962625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.663 [2024-07-15 10:40:47.963000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.663 [2024-07-15 10:40:47.963030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.663 [2024-07-15 10:40:47.963047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.663 [2024-07-15 10:40:47.963297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.663 [2024-07-15 10:40:47.963500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.663 [2024-07-15 10:40:47.963520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.663 [2024-07-15 10:40:47.963532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.663 [2024-07-15 10:40:47.966376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.663 [2024-07-15 10:40:47.975690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.663 [2024-07-15 10:40:47.976063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.663 [2024-07-15 10:40:47.976093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.663 [2024-07-15 10:40:47.976110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.664 [2024-07-15 10:40:47.976360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.664 [2024-07-15 10:40:47.976565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.664 [2024-07-15 10:40:47.976584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.664 [2024-07-15 10:40:47.976597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.664 [2024-07-15 10:40:47.979542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.664 [2024-07-15 10:40:47.989022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.664 [2024-07-15 10:40:47.989459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.664 [2024-07-15 10:40:47.989490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.664 [2024-07-15 10:40:47.989507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.664 [2024-07-15 10:40:47.989748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.664 [2024-07-15 10:40:47.990005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.664 [2024-07-15 10:40:47.990027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.664 [2024-07-15 10:40:47.990040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.664 [2024-07-15 10:40:47.992957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.664 [2024-07-15 10:40:48.002086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.664 [2024-07-15 10:40:48.002438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.664 [2024-07-15 10:40:48.002480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.664 [2024-07-15 10:40:48.002496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.664 [2024-07-15 10:40:48.002718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.664 [2024-07-15 10:40:48.002967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.664 [2024-07-15 10:40:48.002990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.664 [2024-07-15 10:40:48.003004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.664 [2024-07-15 10:40:48.005893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.664 [2024-07-15 10:40:48.015326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.664 [2024-07-15 10:40:48.015731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.664 [2024-07-15 10:40:48.015759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.664 [2024-07-15 10:40:48.015775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.664 [2024-07-15 10:40:48.016039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.664 [2024-07-15 10:40:48.016261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.664 [2024-07-15 10:40:48.016280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.664 [2024-07-15 10:40:48.016293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.664 [2024-07-15 10:40:48.019169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.664 [2024-07-15 10:40:48.028574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.664 [2024-07-15 10:40:48.028959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.664 [2024-07-15 10:40:48.028988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.664 [2024-07-15 10:40:48.029005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.664 [2024-07-15 10:40:48.029255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.664 [2024-07-15 10:40:48.029461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.664 [2024-07-15 10:40:48.029481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.664 [2024-07-15 10:40:48.029494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.664 [2024-07-15 10:40:48.032430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.664 [2024-07-15 10:40:48.041705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.664 [2024-07-15 10:40:48.042074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.664 [2024-07-15 10:40:48.042103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.664 [2024-07-15 10:40:48.042136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.664 [2024-07-15 10:40:48.042372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.664 [2024-07-15 10:40:48.042599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.664 [2024-07-15 10:40:48.042619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.664 [2024-07-15 10:40:48.042637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.664 [2024-07-15 10:40:48.045866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.664 [2024-07-15 10:40:48.055307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.664 [2024-07-15 10:40:48.055728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.664 [2024-07-15 10:40:48.055757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.664 [2024-07-15 10:40:48.055773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.664 [2024-07-15 10:40:48.055997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.664 [2024-07-15 10:40:48.056236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.664 [2024-07-15 10:40:48.056256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.664 [2024-07-15 10:40:48.056269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.664 [2024-07-15 10:40:48.059558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.664 [2024-07-15 10:40:48.068910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.664 [2024-07-15 10:40:48.069255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.664 [2024-07-15 10:40:48.069283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.664 [2024-07-15 10:40:48.069299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.664 [2024-07-15 10:40:48.069502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.664 [2024-07-15 10:40:48.069716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.664 [2024-07-15 10:40:48.069737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.664 [2024-07-15 10:40:48.069750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.664 [2024-07-15 10:40:48.072992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.664 [2024-07-15 10:40:48.082588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.664 [2024-07-15 10:40:48.082903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.664 [2024-07-15 10:40:48.082933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.664 [2024-07-15 10:40:48.082950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.664 [2024-07-15 10:40:48.083182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.664 [2024-07-15 10:40:48.083398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.664 [2024-07-15 10:40:48.083418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.664 [2024-07-15 10:40:48.083432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.664 [2024-07-15 10:40:48.086679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.664 [2024-07-15 10:40:48.096151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.664 [2024-07-15 10:40:48.096579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.664 [2024-07-15 10:40:48.096608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.664 [2024-07-15 10:40:48.096625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.664 [2024-07-15 10:40:48.096882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.664 [2024-07-15 10:40:48.097101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.664 [2024-07-15 10:40:48.097123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.664 [2024-07-15 10:40:48.097137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.664 [2024-07-15 10:40:48.100459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.664 [2024-07-15 10:40:48.109759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.664 [2024-07-15 10:40:48.110109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.664 [2024-07-15 10:40:48.110138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.664 [2024-07-15 10:40:48.110155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.664 [2024-07-15 10:40:48.110385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.664 [2024-07-15 10:40:48.110601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.664 [2024-07-15 10:40:48.110621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.664 [2024-07-15 10:40:48.110634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.664 [2024-07-15 10:40:48.113831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.664 [2024-07-15 10:40:48.123069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.664 [2024-07-15 10:40:48.123428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.664 [2024-07-15 10:40:48.123455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.664 [2024-07-15 10:40:48.123470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.665 [2024-07-15 10:40:48.123686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.665 [2024-07-15 10:40:48.123936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.665 [2024-07-15 10:40:48.123958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.665 [2024-07-15 10:40:48.123973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.665 [2024-07-15 10:40:48.127012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.665 [2024-07-15 10:40:48.136367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.665 [2024-07-15 10:40:48.136710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.665 [2024-07-15 10:40:48.136738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.665 [2024-07-15 10:40:48.136754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.665 [2024-07-15 10:40:48.137021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.665 [2024-07-15 10:40:48.137256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.665 [2024-07-15 10:40:48.137275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.665 [2024-07-15 10:40:48.137288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.665 [2024-07-15 10:40:48.140283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.665 [2024-07-15 10:40:48.149754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.665 [2024-07-15 10:40:48.150135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.665 [2024-07-15 10:40:48.150183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.665 [2024-07-15 10:40:48.150200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.665 [2024-07-15 10:40:48.150439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.665 [2024-07-15 10:40:48.150633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.665 [2024-07-15 10:40:48.150653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.665 [2024-07-15 10:40:48.150667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.665 [2024-07-15 10:40:48.153951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.665 [2024-07-15 10:40:48.163077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.665 [2024-07-15 10:40:48.163487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.665 [2024-07-15 10:40:48.163524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.665 [2024-07-15 10:40:48.163557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.665 [2024-07-15 10:40:48.163786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.665 [2024-07-15 10:40:48.164019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.665 [2024-07-15 10:40:48.164053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.665 [2024-07-15 10:40:48.164067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.665 [2024-07-15 10:40:48.167100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.665 [2024-07-15 10:40:48.176364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.665 [2024-07-15 10:40:48.176717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.665 [2024-07-15 10:40:48.176765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.665 [2024-07-15 10:40:48.176781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.665 [2024-07-15 10:40:48.177043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.665 [2024-07-15 10:40:48.177271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.665 [2024-07-15 10:40:48.177290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.665 [2024-07-15 10:40:48.177302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.665 [2024-07-15 10:40:48.180289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.665 [2024-07-15 10:40:48.189604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.665 [2024-07-15 10:40:48.189992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.665 [2024-07-15 10:40:48.190021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.665 [2024-07-15 10:40:48.190037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.665 [2024-07-15 10:40:48.190295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.665 [2024-07-15 10:40:48.190498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.665 [2024-07-15 10:40:48.190516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.665 [2024-07-15 10:40:48.190529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.665 [2024-07-15 10:40:48.193492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.665 [2024-07-15 10:40:48.202741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.665 [2024-07-15 10:40:48.203164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.665 [2024-07-15 10:40:48.203192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.665 [2024-07-15 10:40:48.203207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.665 [2024-07-15 10:40:48.203443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.665 [2024-07-15 10:40:48.203631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.665 [2024-07-15 10:40:48.203650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.665 [2024-07-15 10:40:48.203662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.665 [2024-07-15 10:40:48.206586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.923 [2024-07-15 10:40:48.216128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.923 [2024-07-15 10:40:48.216514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.923 [2024-07-15 10:40:48.216546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.923 [2024-07-15 10:40:48.216563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.923 [2024-07-15 10:40:48.216793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.923 [2024-07-15 10:40:48.217047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.923 [2024-07-15 10:40:48.217070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.923 [2024-07-15 10:40:48.217084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.923 [2024-07-15 10:40:48.220174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.923 [2024-07-15 10:40:48.229272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.923 [2024-07-15 10:40:48.229680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.923 [2024-07-15 10:40:48.229714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.923 [2024-07-15 10:40:48.229730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.923 [2024-07-15 10:40:48.229990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.923 [2024-07-15 10:40:48.230205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.923 [2024-07-15 10:40:48.230225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.923 [2024-07-15 10:40:48.230237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.923 [2024-07-15 10:40:48.233459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.923 [2024-07-15 10:40:48.242404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.923 [2024-07-15 10:40:48.242775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.923 [2024-07-15 10:40:48.242809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.923 [2024-07-15 10:40:48.242826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.923 [2024-07-15 10:40:48.243078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.923 [2024-07-15 10:40:48.243284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.923 [2024-07-15 10:40:48.243303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.923 [2024-07-15 10:40:48.243315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.923 [2024-07-15 10:40:48.246140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.923 [2024-07-15 10:40:48.255643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.923 [2024-07-15 10:40:48.256048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.923 [2024-07-15 10:40:48.256091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.923 [2024-07-15 10:40:48.256107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.923 [2024-07-15 10:40:48.256347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.923 [2024-07-15 10:40:48.256545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.923 [2024-07-15 10:40:48.256565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.923 [2024-07-15 10:40:48.256593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.923 [2024-07-15 10:40:48.259732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.923 [2024-07-15 10:40:48.269019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.923 [2024-07-15 10:40:48.269468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.923 [2024-07-15 10:40:48.269510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.923 [2024-07-15 10:40:48.269526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.923 [2024-07-15 10:40:48.269767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.923 [2024-07-15 10:40:48.270007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.923 [2024-07-15 10:40:48.270028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.923 [2024-07-15 10:40:48.270041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.923 [2024-07-15 10:40:48.273109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.923 [2024-07-15 10:40:48.282318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.923 [2024-07-15 10:40:48.282626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.923 [2024-07-15 10:40:48.282666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.923 [2024-07-15 10:40:48.282682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.923 [2024-07-15 10:40:48.282944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.923 [2024-07-15 10:40:48.283192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.923 [2024-07-15 10:40:48.283210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.923 [2024-07-15 10:40:48.283223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.923 [2024-07-15 10:40:48.286160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.923 [2024-07-15 10:40:48.295573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.923 [2024-07-15 10:40:48.295973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.923 [2024-07-15 10:40:48.296009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.923 [2024-07-15 10:40:48.296042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.923 [2024-07-15 10:40:48.296282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.923 [2024-07-15 10:40:48.296475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.923 [2024-07-15 10:40:48.296494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.923 [2024-07-15 10:40:48.296506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.923 [2024-07-15 10:40:48.299369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.923 [2024-07-15 10:40:48.308549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.924 [2024-07-15 10:40:48.308973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.924 [2024-07-15 10:40:48.308999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.924 [2024-07-15 10:40:48.309031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.924 [2024-07-15 10:40:48.309272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.924 [2024-07-15 10:40:48.309480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.924 [2024-07-15 10:40:48.309499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.924 [2024-07-15 10:40:48.309511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.924 [2024-07-15 10:40:48.312422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.924 [2024-07-15 10:40:48.321775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.924 [2024-07-15 10:40:48.322215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.924 [2024-07-15 10:40:48.322242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.924 [2024-07-15 10:40:48.322258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.924 [2024-07-15 10:40:48.322479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.924 [2024-07-15 10:40:48.322705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.924 [2024-07-15 10:40:48.322723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.924 [2024-07-15 10:40:48.322735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.924 [2024-07-15 10:40:48.325706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.924 [2024-07-15 10:40:48.334909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.924 [2024-07-15 10:40:48.335300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.924 [2024-07-15 10:40:48.335326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.924 [2024-07-15 10:40:48.335342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.924 [2024-07-15 10:40:48.335564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.924 [2024-07-15 10:40:48.335789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.924 [2024-07-15 10:40:48.335831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.924 [2024-07-15 10:40:48.335845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.924 [2024-07-15 10:40:48.338746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.924 [2024-07-15 10:40:48.347951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.924 [2024-07-15 10:40:48.348281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.924 [2024-07-15 10:40:48.348308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.924 [2024-07-15 10:40:48.348324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.924 [2024-07-15 10:40:48.348546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.924 [2024-07-15 10:40:48.348754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.924 [2024-07-15 10:40:48.348773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.924 [2024-07-15 10:40:48.348799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.924 [2024-07-15 10:40:48.351732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.924 [2024-07-15 10:40:48.361073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.924 [2024-07-15 10:40:48.361433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.924 [2024-07-15 10:40:48.361460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.924 [2024-07-15 10:40:48.361480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.924 [2024-07-15 10:40:48.361716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.924 [2024-07-15 10:40:48.361953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.924 [2024-07-15 10:40:48.361974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.924 [2024-07-15 10:40:48.361987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.924 [2024-07-15 10:40:48.364891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.924 [2024-07-15 10:40:48.374171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.924 [2024-07-15 10:40:48.374533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.924 [2024-07-15 10:40:48.374559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.924 [2024-07-15 10:40:48.374574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.924 [2024-07-15 10:40:48.374790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.924 [2024-07-15 10:40:48.375026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.924 [2024-07-15 10:40:48.375045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.924 [2024-07-15 10:40:48.375058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.924 [2024-07-15 10:40:48.377857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.924 [2024-07-15 10:40:48.387193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.924 [2024-07-15 10:40:48.387587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.924 [2024-07-15 10:40:48.387614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.924 [2024-07-15 10:40:48.387630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.924 [2024-07-15 10:40:48.387863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.924 [2024-07-15 10:40:48.388068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.924 [2024-07-15 10:40:48.388087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.924 [2024-07-15 10:40:48.388100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.924 [2024-07-15 10:40:48.391013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.924 [2024-07-15 10:40:48.400313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.924 [2024-07-15 10:40:48.400676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.924 [2024-07-15 10:40:48.400703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.924 [2024-07-15 10:40:48.400719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.924 [2024-07-15 10:40:48.400983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.924 [2024-07-15 10:40:48.401218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.924 [2024-07-15 10:40:48.401240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.924 [2024-07-15 10:40:48.401253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.924 [2024-07-15 10:40:48.404049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.924 [2024-07-15 10:40:48.413392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.924 [2024-07-15 10:40:48.413879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.924 [2024-07-15 10:40:48.413920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.924 [2024-07-15 10:40:48.413936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.924 [2024-07-15 10:40:48.414183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.924 [2024-07-15 10:40:48.414375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.924 [2024-07-15 10:40:48.414393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.924 [2024-07-15 10:40:48.414405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.924 [2024-07-15 10:40:48.417232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.924 [2024-07-15 10:40:48.426411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.924 [2024-07-15 10:40:48.426773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.924 [2024-07-15 10:40:48.426822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.924 [2024-07-15 10:40:48.426839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.924 [2024-07-15 10:40:48.427107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.924 [2024-07-15 10:40:48.427300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.924 [2024-07-15 10:40:48.427318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.924 [2024-07-15 10:40:48.427330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.924 [2024-07-15 10:40:48.430154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.924 [2024-07-15 10:40:48.439490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.924 [2024-07-15 10:40:48.439824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.924 [2024-07-15 10:40:48.439851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.924 [2024-07-15 10:40:48.439881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.924 [2024-07-15 10:40:48.440124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.924 [2024-07-15 10:40:48.440334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.924 [2024-07-15 10:40:48.440352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.924 [2024-07-15 10:40:48.440364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.925 [2024-07-15 10:40:48.443292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.925 [2024-07-15 10:40:48.452600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.925 [2024-07-15 10:40:48.452971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.925 [2024-07-15 10:40:48.452998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.925 [2024-07-15 10:40:48.453014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.925 [2024-07-15 10:40:48.453249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.925 [2024-07-15 10:40:48.453458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.925 [2024-07-15 10:40:48.453476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.925 [2024-07-15 10:40:48.453488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.925 [2024-07-15 10:40:48.456421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.925 [2024-07-15 10:40:48.465622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.925 [2024-07-15 10:40:48.466022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.925 [2024-07-15 10:40:48.466048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:23:59.925 [2024-07-15 10:40:48.466064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:23:59.925 [2024-07-15 10:40:48.466285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:23:59.925 [2024-07-15 10:40:48.466493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.925 [2024-07-15 10:40:48.466511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.925 [2024-07-15 10:40:48.466523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.925 [2024-07-15 10:40:48.469574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.182 [2024-07-15 10:40:48.479030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.182 [2024-07-15 10:40:48.479482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.182 [2024-07-15 10:40:48.479525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.182 [2024-07-15 10:40:48.479542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.182 [2024-07-15 10:40:48.479782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.182 [2024-07-15 10:40:48.480024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.182 [2024-07-15 10:40:48.480044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.182 [2024-07-15 10:40:48.480058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.182 [2024-07-15 10:40:48.483281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.182 [2024-07-15 10:40:48.492212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.182 [2024-07-15 10:40:48.492528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.182 [2024-07-15 10:40:48.492555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.182 [2024-07-15 10:40:48.492570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.182 [2024-07-15 10:40:48.492791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.182 [2024-07-15 10:40:48.492997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.182 [2024-07-15 10:40:48.493016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.182 [2024-07-15 10:40:48.493029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.182 [2024-07-15 10:40:48.495997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.182 [2024-07-15 10:40:48.505351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.505780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.505831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.505848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.183 [2024-07-15 10:40:48.506079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.183 [2024-07-15 10:40:48.506305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.183 [2024-07-15 10:40:48.506323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.183 [2024-07-15 10:40:48.506335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.183 [2024-07-15 10:40:48.509285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.183 [2024-07-15 10:40:48.518536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.518966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.518994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.519010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.183 [2024-07-15 10:40:48.519252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.183 [2024-07-15 10:40:48.519460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.183 [2024-07-15 10:40:48.519479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.183 [2024-07-15 10:40:48.519491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.183 [2024-07-15 10:40:48.522405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.183 [2024-07-15 10:40:48.531586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.531958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.532001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.532017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.183 [2024-07-15 10:40:48.532283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.183 [2024-07-15 10:40:48.532477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.183 [2024-07-15 10:40:48.532494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.183 [2024-07-15 10:40:48.532511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.183 [2024-07-15 10:40:48.535407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.183 [2024-07-15 10:40:48.544728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.545243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.545284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.545301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.183 [2024-07-15 10:40:48.545551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.183 [2024-07-15 10:40:48.545743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.183 [2024-07-15 10:40:48.545760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.183 [2024-07-15 10:40:48.545772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.183 [2024-07-15 10:40:48.548704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.183 [2024-07-15 10:40:48.557787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.558158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.558185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.558200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.183 [2024-07-15 10:40:48.558437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.183 [2024-07-15 10:40:48.558645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.183 [2024-07-15 10:40:48.558663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.183 [2024-07-15 10:40:48.558676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.183 [2024-07-15 10:40:48.561607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.183 [2024-07-15 10:40:48.570916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.571307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.571334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.571349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.183 [2024-07-15 10:40:48.571584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.183 [2024-07-15 10:40:48.571793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.183 [2024-07-15 10:40:48.571836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.183 [2024-07-15 10:40:48.571849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.183 [2024-07-15 10:40:48.574752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.183 [2024-07-15 10:40:48.584079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.584413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.584439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.584455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.183 [2024-07-15 10:40:48.584675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.183 [2024-07-15 10:40:48.584913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.183 [2024-07-15 10:40:48.584934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.183 [2024-07-15 10:40:48.584947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.183 [2024-07-15 10:40:48.587864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.183 [2024-07-15 10:40:48.597245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.597673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.597714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.597731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.183 [2024-07-15 10:40:48.597982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.183 [2024-07-15 10:40:48.598213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.183 [2024-07-15 10:40:48.598232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.183 [2024-07-15 10:40:48.598244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.183 [2024-07-15 10:40:48.601139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.183 [2024-07-15 10:40:48.610226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.610651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.610692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.610709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.183 [2024-07-15 10:40:48.610935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.183 [2024-07-15 10:40:48.611182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.183 [2024-07-15 10:40:48.611200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.183 [2024-07-15 10:40:48.611212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.183 [2024-07-15 10:40:48.614096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.183 [2024-07-15 10:40:48.623276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.623763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.623812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.623830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.183 [2024-07-15 10:40:48.624080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.183 [2024-07-15 10:40:48.624294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.183 [2024-07-15 10:40:48.624313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.183 [2024-07-15 10:40:48.624325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.183 [2024-07-15 10:40:48.627104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.183 [2024-07-15 10:40:48.636366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.636842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.636892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.636908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.183 [2024-07-15 10:40:48.637171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.183 [2024-07-15 10:40:48.637363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.183 [2024-07-15 10:40:48.637381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.183 [2024-07-15 10:40:48.637393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.183 [2024-07-15 10:40:48.640215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.183 [2024-07-15 10:40:48.649433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.649797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.649845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.649861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.183 [2024-07-15 10:40:48.650128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.183 [2024-07-15 10:40:48.650321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.183 [2024-07-15 10:40:48.650339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.183 [2024-07-15 10:40:48.650352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.183 [2024-07-15 10:40:48.653181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.183 [2024-07-15 10:40:48.662520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.662896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.662922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.662937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.183 [2024-07-15 10:40:48.663155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.183 [2024-07-15 10:40:48.663364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.183 [2024-07-15 10:40:48.663382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.183 [2024-07-15 10:40:48.663394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.183 [2024-07-15 10:40:48.666370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.183 [2024-07-15 10:40:48.675692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.676061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.676089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.676105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.183 [2024-07-15 10:40:48.676334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.183 [2024-07-15 10:40:48.676542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.183 [2024-07-15 10:40:48.676560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.183 [2024-07-15 10:40:48.676572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.183 [2024-07-15 10:40:48.679509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.183 [2024-07-15 10:40:48.688756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.183 [2024-07-15 10:40:48.689176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.183 [2024-07-15 10:40:48.689229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.183 [2024-07-15 10:40:48.689244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.184 [2024-07-15 10:40:48.689505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.184 [2024-07-15 10:40:48.689697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.184 [2024-07-15 10:40:48.689715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.184 [2024-07-15 10:40:48.689728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.184 [2024-07-15 10:40:48.692550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.184 [2024-07-15 10:40:48.701765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.184 [2024-07-15 10:40:48.702195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.184 [2024-07-15 10:40:48.702236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.184 [2024-07-15 10:40:48.702253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.184 [2024-07-15 10:40:48.702493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.184 [2024-07-15 10:40:48.702702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.184 [2024-07-15 10:40:48.702720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.184 [2024-07-15 10:40:48.702733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.184 [2024-07-15 10:40:48.705657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
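The repeated "connect() failed, errno = 111" entries above all report the same condition: errno 111 on Linux is ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 while the target side is down, so every reset/reconnect attempt by the initiator fails immediately. The following is a minimal standalone sketch of that connect-and-retry pattern, not SPDK code; the address and port are taken from the log, the retry count and delay are arbitrary illustrative values.

```c
/* Hedged sketch: reproduce the ECONNREFUSED (errno 111) retry pattern seen
 * in the log by repeatedly connecting to 10.0.0.2:4420. Not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port from the log */
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        return 1;
    }

    for (int attempt = 1; attempt <= 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("attempt %d: connected\n", attempt);
            close(fd);
            return 0;
        }
        /* With no listener on the port, this prints errno 111 (ECONNREFUSED),
         * matching the posix_sock_create errors in the log above. */
        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               attempt, errno, strerror(errno));
        close(fd);
        sleep(1);                                /* arbitrary back-off for the sketch */
    }
    return 1;
}
```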
00:24:00.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1300193 Killed "${NVMF_APP[@]}" "$@" 00:24:00.184 10:40:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:00.184 10:40:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:00.184 10:40:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.184 10:40:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.184 10:40:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:00.184 10:40:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1301150 00:24:00.184 10:40:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:00.184 10:40:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1301150 00:24:00.184 10:40:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1301150 ']' 00:24:00.184 10:40:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.184 10:40:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.184 10:40:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.184 10:40:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.184 10:40:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:00.184 [2024-07-15 10:40:48.715218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.184 [2024-07-15 10:40:48.715590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.184 [2024-07-15 10:40:48.715618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.184 [2024-07-15 10:40:48.715634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.184 [2024-07-15 10:40:48.715858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.184 [2024-07-15 10:40:48.716076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.184 [2024-07-15 10:40:48.716110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.184 [2024-07-15 10:40:48.716123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.184 [2024-07-15 10:40:48.719236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.184 [2024-07-15 10:40:48.728630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.184 [2024-07-15 10:40:48.729005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.184 [2024-07-15 10:40:48.729036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.184 [2024-07-15 10:40:48.729053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.184 [2024-07-15 10:40:48.729284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.184 [2024-07-15 10:40:48.729499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.184 [2024-07-15 10:40:48.729518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.184 [2024-07-15 10:40:48.729531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.442 [2024-07-15 10:40:48.733041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.442 [2024-07-15 10:40:48.742043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.442 [2024-07-15 10:40:48.742447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.442 [2024-07-15 10:40:48.742476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.442 [2024-07-15 10:40:48.742497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.442 [2024-07-15 10:40:48.742723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.442 [2024-07-15 10:40:48.742973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.442 [2024-07-15 10:40:48.742994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.442 [2024-07-15 10:40:48.743007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.442 [2024-07-15 10:40:48.746048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.442 [2024-07-15 10:40:48.754407] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:00.442 [2024-07-15 10:40:48.754477] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.442 [2024-07-15 10:40:48.755452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.442 [2024-07-15 10:40:48.755830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.442 [2024-07-15 10:40:48.755858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.442 [2024-07-15 10:40:48.755874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.442 [2024-07-15 10:40:48.756109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.442 [2024-07-15 10:40:48.756324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.442 [2024-07-15 10:40:48.756344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.442 [2024-07-15 10:40:48.756357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.442 [2024-07-15 10:40:48.759371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.442 [2024-07-15 10:40:48.768721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.442 [2024-07-15 10:40:48.769084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.442 [2024-07-15 10:40:48.769111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.442 [2024-07-15 10:40:48.769127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.442 [2024-07-15 10:40:48.769358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.442 [2024-07-15 10:40:48.769572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.442 [2024-07-15 10:40:48.769591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.442 [2024-07-15 10:40:48.769604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.442 [2024-07-15 10:40:48.772625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.442 [2024-07-15 10:40:48.781915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.442 [2024-07-15 10:40:48.782371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.442 [2024-07-15 10:40:48.782399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.442 [2024-07-15 10:40:48.782415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.442 [2024-07-15 10:40:48.782661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.442 [2024-07-15 10:40:48.782904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.442 [2024-07-15 10:40:48.782925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.442 [2024-07-15 10:40:48.782939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.442 [2024-07-15 10:40:48.785983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.442 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.442 [2024-07-15 10:40:48.795395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.442 [2024-07-15 10:40:48.795812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.442 [2024-07-15 10:40:48.795841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.442 [2024-07-15 10:40:48.795857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.442 [2024-07-15 10:40:48.796088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.442 [2024-07-15 10:40:48.796304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.442 [2024-07-15 10:40:48.796323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.442 [2024-07-15 10:40:48.796335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.442 [2024-07-15 10:40:48.799399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.442 [2024-07-15 10:40:48.808752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.442 [2024-07-15 10:40:48.809111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.442 [2024-07-15 10:40:48.809139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.443 [2024-07-15 10:40:48.809154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.443 [2024-07-15 10:40:48.809381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.443 [2024-07-15 10:40:48.809595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.443 [2024-07-15 10:40:48.809614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.443 [2024-07-15 10:40:48.809627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.443 [2024-07-15 10:40:48.812615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.443 [2024-07-15 10:40:48.819309] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:00.443 [2024-07-15 10:40:48.822026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.443 [2024-07-15 10:40:48.822354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.443 [2024-07-15 10:40:48.822382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.443 [2024-07-15 10:40:48.822398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.443 [2024-07-15 10:40:48.822628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.443 [2024-07-15 10:40:48.822851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.443 [2024-07-15 10:40:48.822876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.443 [2024-07-15 10:40:48.822890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.443 [2024-07-15 10:40:48.825896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.443 [2024-07-15 10:40:48.835265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.443 [2024-07-15 10:40:48.835753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.443 [2024-07-15 10:40:48.835788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.443 [2024-07-15 10:40:48.835830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.443 [2024-07-15 10:40:48.836071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.443 [2024-07-15 10:40:48.836291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.443 [2024-07-15 10:40:48.836311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.443 [2024-07-15 10:40:48.836326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.443 [2024-07-15 10:40:48.839311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.443 [2024-07-15 10:40:48.848530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.443 [2024-07-15 10:40:48.848938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.443 [2024-07-15 10:40:48.848967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.443 [2024-07-15 10:40:48.848984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.443 [2024-07-15 10:40:48.849213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.443 [2024-07-15 10:40:48.849427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.443 [2024-07-15 10:40:48.849447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.443 [2024-07-15 10:40:48.849459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.443 [2024-07-15 10:40:48.852488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.443 [2024-07-15 10:40:48.861753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.443 [2024-07-15 10:40:48.862154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.443 [2024-07-15 10:40:48.862197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.443 [2024-07-15 10:40:48.862213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.443 [2024-07-15 10:40:48.862450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.443 [2024-07-15 10:40:48.862649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.443 [2024-07-15 10:40:48.862669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.443 [2024-07-15 10:40:48.862681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.443 [2024-07-15 10:40:48.865668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.443 [2024-07-15 10:40:48.875012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.443 [2024-07-15 10:40:48.875424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.443 [2024-07-15 10:40:48.875469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.443 [2024-07-15 10:40:48.875488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.443 [2024-07-15 10:40:48.875765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.443 [2024-07-15 10:40:48.875997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.443 [2024-07-15 10:40:48.876019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.443 [2024-07-15 10:40:48.876035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.443 [2024-07-15 10:40:48.879062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.443 [2024-07-15 10:40:48.888403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.443 [2024-07-15 10:40:48.888822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.443 [2024-07-15 10:40:48.888856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.443 [2024-07-15 10:40:48.888875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.443 [2024-07-15 10:40:48.889126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.443 [2024-07-15 10:40:48.889342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.443 [2024-07-15 10:40:48.889362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.443 [2024-07-15 10:40:48.889377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.443 [2024-07-15 10:40:48.892429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.443 [2024-07-15 10:40:48.901642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.443 [2024-07-15 10:40:48.902057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.443 [2024-07-15 10:40:48.902086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.443 [2024-07-15 10:40:48.902103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.443 [2024-07-15 10:40:48.902333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.443 [2024-07-15 10:40:48.902548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.443 [2024-07-15 10:40:48.902567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.443 [2024-07-15 10:40:48.902580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.443 [2024-07-15 10:40:48.905576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.443 [2024-07-15 10:40:48.914926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.443 [2024-07-15 10:40:48.915284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.443 [2024-07-15 10:40:48.915327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.443 [2024-07-15 10:40:48.915357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.443 [2024-07-15 10:40:48.915632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.443 [2024-07-15 10:40:48.915857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.443 [2024-07-15 10:40:48.915877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.443 [2024-07-15 10:40:48.915890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.443 [2024-07-15 10:40:48.918901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.443 [2024-07-15 10:40:48.923648] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.443 [2024-07-15 10:40:48.923677] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.443 [2024-07-15 10:40:48.923689] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.443 [2024-07-15 10:40:48.923714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.443 [2024-07-15 10:40:48.923724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.443 [2024-07-15 10:40:48.923950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.443 [2024-07-15 10:40:48.923976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.443 [2024-07-15 10:40:48.923979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.443 [2024-07-15 10:40:48.928523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.443 [2024-07-15 10:40:48.928910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.443 [2024-07-15 10:40:48.928941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.443 [2024-07-15 10:40:48.928958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.443 [2024-07-15 10:40:48.929178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.443 [2024-07-15 10:40:48.929406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.443 [2024-07-15 10:40:48.929427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.443 [2024-07-15 10:40:48.929442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:00.443 [2024-07-15 10:40:48.932629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.443 [2024-07-15 10:40:48.942047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.443 [2024-07-15 10:40:48.942547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.443 [2024-07-15 10:40:48.942583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.443 [2024-07-15 10:40:48.942603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.444 [2024-07-15 10:40:48.942851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.444 [2024-07-15 10:40:48.943068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.444 [2024-07-15 10:40:48.943089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.444 [2024-07-15 10:40:48.943106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.444 [2024-07-15 10:40:48.946354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.444 [2024-07-15 10:40:48.955621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.444 [2024-07-15 10:40:48.956131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.444 [2024-07-15 10:40:48.956171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.444 [2024-07-15 10:40:48.956191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.444 [2024-07-15 10:40:48.956429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.444 [2024-07-15 10:40:48.956645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.444 [2024-07-15 10:40:48.956666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.444 [2024-07-15 10:40:48.956683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.444 [2024-07-15 10:40:48.959907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.444 [2024-07-15 10:40:48.969309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.444 [2024-07-15 10:40:48.969755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.444 [2024-07-15 10:40:48.969793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.444 [2024-07-15 10:40:48.969821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.444 [2024-07-15 10:40:48.970046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.444 [2024-07-15 10:40:48.970280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.444 [2024-07-15 10:40:48.970301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.444 [2024-07-15 10:40:48.970317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.444 [2024-07-15 10:40:48.973485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.444 [2024-07-15 10:40:48.982914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.444 [2024-07-15 10:40:48.983368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.444 [2024-07-15 10:40:48.983405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.444 [2024-07-15 10:40:48.983424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.444 [2024-07-15 10:40:48.983671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.444 [2024-07-15 10:40:48.983895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.444 [2024-07-15 10:40:48.983916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.444 [2024-07-15 10:40:48.983933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.444 [2024-07-15 10:40:48.987307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.702 [2024-07-15 10:40:48.996640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.702 [2024-07-15 10:40:48.997123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.702 [2024-07-15 10:40:48.997162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.702 [2024-07-15 10:40:48.997190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.702 [2024-07-15 10:40:48.997431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.702 [2024-07-15 10:40:48.997646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.702 [2024-07-15 10:40:48.997667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.702 [2024-07-15 10:40:48.997684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.702 [2024-07-15 10:40:49.000896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.702 [2024-07-15 10:40:49.010163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.702 [2024-07-15 10:40:49.010521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.702 [2024-07-15 10:40:49.010549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.702 [2024-07-15 10:40:49.010566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.702 [2024-07-15 10:40:49.010780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.702 [2024-07-15 10:40:49.011006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.702 [2024-07-15 10:40:49.011027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.702 [2024-07-15 10:40:49.011041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.702 [2024-07-15 10:40:49.014223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.702 [2024-07-15 10:40:49.023810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.702 [2024-07-15 10:40:49.024166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.702 [2024-07-15 10:40:49.024194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.702 [2024-07-15 10:40:49.024211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.702 [2024-07-15 10:40:49.024426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.702 [2024-07-15 10:40:49.024643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.702 [2024-07-15 10:40:49.024664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.702 [2024-07-15 10:40:49.024678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.702 [2024-07-15 10:40:49.027933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:00.702 [2024-07-15 10:40:49.037344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.702 [2024-07-15 10:40:49.037685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.702 [2024-07-15 10:40:49.037714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.702 [2024-07-15 10:40:49.037730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.702 [2024-07-15 10:40:49.037960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.702 [2024-07-15 10:40:49.038192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.702 [2024-07-15 10:40:49.038213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.702 [2024-07-15 10:40:49.038235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.702 [2024-07-15 10:40:49.041478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.702 [2024-07-15 10:40:49.050823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.702 [2024-07-15 10:40:49.051240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.702 [2024-07-15 10:40:49.051268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.702 [2024-07-15 10:40:49.051284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:00.702 [2024-07-15 10:40:49.051515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.702 [2024-07-15 10:40:49.051727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.702 [2024-07-15 10:40:49.051748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.702 [2024-07-15 10:40:49.051761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.702 [2024-07-15 10:40:49.053208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.702 [2024-07-15 10:40:49.055067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:00.702 [2024-07-15 10:40:49.064448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.702 [2024-07-15 10:40:49.064820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.702 [2024-07-15 10:40:49.064848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.702 [2024-07-15 10:40:49.064864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.702 [2024-07-15 10:40:49.065079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.702 [2024-07-15 10:40:49.065304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.702 [2024-07-15 10:40:49.065324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.702 [2024-07-15 10:40:49.065337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:00.702 [2024-07-15 10:40:49.068512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.702 [2024-07-15 10:40:49.077940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.702 [2024-07-15 10:40:49.078287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.702 [2024-07-15 10:40:49.078329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.702 [2024-07-15 10:40:49.078344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.702 [2024-07-15 10:40:49.078587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.702 [2024-07-15 10:40:49.078798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.702 [2024-07-15 10:40:49.078827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.702 [2024-07-15 10:40:49.078840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.702 [2024-07-15 10:40:49.082037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.702 [2024-07-15 10:40:49.091450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.702 [2024-07-15 10:40:49.091942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.702 [2024-07-15 10:40:49.091983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.702 [2024-07-15 10:40:49.092003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.702 [2024-07-15 10:40:49.092249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.702 [2024-07-15 10:40:49.092465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.702 [2024-07-15 10:40:49.092487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.702 [2024-07-15 10:40:49.092504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.702 [2024-07-15 10:40:49.095773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.702 Malloc0 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:00.702 [2024-07-15 10:40:49.105075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.702 [2024-07-15 10:40:49.105434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.702 [2024-07-15 10:40:49.105463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.702 [2024-07-15 10:40:49.105479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.702 [2024-07-15 10:40:49.105709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.702 [2024-07-15 10:40:49.105951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.702 [2024-07-15 10:40:49.105972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.702 [2024-07-15 10:40:49.105986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:00.702 [2024-07-15 10:40:49.109280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:00.702 [2024-07-15 10:40:49.118642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.702 [2024-07-15 10:40:49.118989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.702 [2024-07-15 10:40:49.119017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4ac0 with addr=10.0.0.2, port=4420 00:24:00.702 [2024-07-15 10:40:49.119033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4ac0 is same with the state(5) to be set 00:24:00.702 [2024-07-15 10:40:49.119210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.702 [2024-07-15 10:40:49.119262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4ac0 (9): Bad file descriptor 00:24:00.702 [2024-07-15 10:40:49.119472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.702 [2024-07-15 10:40:49.119492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.702 [2024-07-15 10:40:49.119505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.702 [2024-07-15 10:40:49.122700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.702 10:40:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1300482 00:24:00.702 [2024-07-15 10:40:49.132226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.702 [2024-07-15 10:40:49.167951] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:10.667 00:24:10.667 Latency(us) 00:24:10.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.667 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:10.667 Verification LBA range: start 0x0 length 0x4000 00:24:10.667 Nvme1n1 : 15.00 6770.70 26.45 10079.60 0.00 7573.43 843.47 18447.17 00:24:10.667 =================================================================================================================== 00:24:10.667 Total : 6770.70 26.45 10079.60 0.00 7573.43 843.47 18447.17 00:24:10.667 10:40:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:10.667 10:40:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:10.667 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.667 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.667 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.667 10:40:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:10.667 10:40:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:10.667 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:10.668 rmmod nvme_tcp 00:24:10.668 rmmod nvme_fabrics 00:24:10.668 rmmod nvme_keyring 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1301150 ']' 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1301150 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1301150 ']' 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1301150 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1301150 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1301150' 00:24:10.668 killing process with pid 1301150 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1301150 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1301150 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.668 10:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.585 10:41:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:12.585 00:24:12.585 real 0m22.463s 00:24:12.585 user 0m59.960s 00:24:12.585 sys 0m4.259s 00:24:12.585 10:41:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.585 10:41:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:12.585 ************************************ 00:24:12.585 END TEST nvmf_bdevperf 00:24:12.585 ************************************ 00:24:12.585 10:41:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:12.585 10:41:00 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:12.585 10:41:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:12.585 10:41:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.585 10:41:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:12.585 ************************************ 00:24:12.585 START TEST nvmf_target_disconnect 00:24:12.585 ************************************ 00:24:12.585 10:41:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:12.585 * Looking for test storage... 
00:24:12.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:12.585 10:41:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.585 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:12.585 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.585 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.585 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.585 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.585 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.585 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.585 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.585 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:12.586 10:41:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:14.495 10:41:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.495 10:41:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:14.495 10:41:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:14.495 10:41:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:14.495 10:41:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:14.495 10:41:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
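The block above is gather_supported_nvmf_pci_devs assembling allowlists of PCI device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox parts) before scanning the host for usable NICs. A rough shell equivalent of the scan that follows is sketched below; the use of lspci is an assumption made for illustration (the script itself walks a prebuilt PCI cache), and only the 0x159b E810 ID that this runner reports is matched.
# Sketch of the NIC discovery performed above: match PCI functions by vendor/device ID,
# then read the kernel netdev name(s) exposed under sysfs for each matching function.
intel=8086
e810=159b
for pci in $(lspci -D -n -d "${intel}:${e810}" | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] || continue          # function present but no netdev bound to it
        echo "Found net device under ${pci}: $(basename "$net")"
    done
done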
00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:14.495 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:14.495 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.495 10:41:03 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:14.495 Found net devices under 0000:09:00.0: cvl_0_0 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:14.495 Found net devices under 0000:09:00.1: cvl_0_1 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.495 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:14.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:24:14.755 00:24:14.755 --- 10.0.0.2 ping statistics --- 00:24:14.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.755 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:14.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:24:14.755 00:24:14.755 --- 10.0.0.1 ping statistics --- 00:24:14.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.755 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:14.755 ************************************ 00:24:14.755 START TEST nvmf_target_disconnect_tc1 00:24:14.755 ************************************ 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:24:14.755 
10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:14.755 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:14.755 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.755 [2024-07-15 10:41:03.278975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.755 [2024-07-15 10:41:03.279041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22551a0 with addr=10.0.0.2, port=4420 00:24:14.755 [2024-07-15 10:41:03.279094] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:14.755 [2024-07-15 10:41:03.279125] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:14.755 [2024-07-15 10:41:03.279139] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:14.755 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:14.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:14.755 Initializing NVMe Controllers 00:24:14.756 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:24:14.756 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:14.756 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:14.756 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:14.756 00:24:14.756 real 0m0.091s 00:24:14.756 user 0m0.038s 00:24:14.756 sys 
0m0.053s 00:24:14.756 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:14.756 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:14.756 ************************************ 00:24:14.756 END TEST nvmf_target_disconnect_tc1 00:24:14.756 ************************************ 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:15.014 ************************************ 00:24:15.014 START TEST nvmf_target_disconnect_tc2 00:24:15.014 ************************************ 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1304301 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1304301 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1304301 ']' 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
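The nvmf_tgt instance being started here for tc2 runs inside the cvl_0_0_ns_spdk namespace that nvmf_tcp_init set up a little earlier in the trace: one port of the E810 pair is moved into the namespace as the target side, while the other stays in the root namespace as the initiator side. For reference, a condensed sketch of that plumbing and its reachability check, run as root with the interface names and addresses from this run, is:
# Sketch of the topology built by nvmf_tcp_init above; names and addresses are from this run.
ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator
Keeping both ends of a physical NIC pair on one machine this way gives the host-side tools a real TCP path to disconnect and reconnect against, which is what the remainder of the log exercises.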
00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:15.014 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.014 [2024-07-15 10:41:03.396490] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:15.014 [2024-07-15 10:41:03.396576] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.014 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.014 [2024-07-15 10:41:03.463329] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:15.271 [2024-07-15 10:41:03.571156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.271 [2024-07-15 10:41:03.571201] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.271 [2024-07-15 10:41:03.571214] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.271 [2024-07-15 10:41:03.571225] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.271 [2024-07-15 10:41:03.571236] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.272 [2024-07-15 10:41:03.571319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:15.272 [2024-07-15 10:41:03.571361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:15.272 [2024-07-15 10:41:03.571416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:15.272 [2024-07-15 10:41:03.571419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.272 Malloc0 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:15.272 10:41:03 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.272 [2024-07-15 10:41:03.735900] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.272 [2024-07-15 10:41:03.764110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1304333 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:15.272 10:41:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:15.272 EAL: No free 2048 kB 
hugepages reported on node 1 00:24:17.820 10:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1304301 00:24:17.820 10:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 [2024-07-15 10:41:05.787756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 
starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 [2024-07-15 10:41:05.788135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O 
failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Write completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.820 Read completed with error (sct=0, sc=8) 00:24:17.820 starting I/O failed 00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 [2024-07-15 10:41:05.788444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 
00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Read completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 Write completed with error (sct=0, sc=8) 00:24:17.821 starting I/O failed 00:24:17.821 [2024-07-15 10:41:05.788737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:17.821 [2024-07-15 10:41:05.788917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.788958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.789066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.789103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.789227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.789254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.789331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.789358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.789436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.789463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.789568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.789607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 
00:24:17.821 [2024-07-15 10:41:05.789733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.789760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.789869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.789910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.790018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.790045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.790166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.790192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.790287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.790314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.790428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.790454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.790543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.790570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.790688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.790716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.790813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.790840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.790934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.790960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 
00:24:17.821 [2024-07-15 10:41:05.791054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.791080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.791236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.791262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.791372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.791398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.791530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.791558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.791670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.791697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.791828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.791856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.791951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.791977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.792055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.821 [2024-07-15 10:41:05.792082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.821 qpair failed and we were unable to recover it. 00:24:17.821 [2024-07-15 10:41:05.792217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.792243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.792339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.792367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 
00:24:17.822 [2024-07-15 10:41:05.792486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.792512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.792608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.792634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.792751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.792778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.792885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.792911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.792999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.793025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.793139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.793164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.793277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.793306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.793397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.793437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.793557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.793585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.793711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.793751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 
00:24:17.822 [2024-07-15 10:41:05.793858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.793886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.793982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.794009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.794132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.794158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.794248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.794275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.794389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.794415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.794496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.794522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.794630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.794656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.794739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.794765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.794873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.794899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.795016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.795042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 
00:24:17.822 [2024-07-15 10:41:05.795132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.795158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.795270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.795296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.795441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.795469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.795552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.795578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.795703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.795743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.795866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.795895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.796010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.796037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.796165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.796192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.796304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.796329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.796448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.796474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 
00:24:17.822 [2024-07-15 10:41:05.796606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.796646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.796742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.796769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.796870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.796896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.796977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.797007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.797132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.797158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.797297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.797323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.797425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.797476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.797586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.797612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.822 [2024-07-15 10:41:05.797699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.822 [2024-07-15 10:41:05.797726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.822 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.797821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.797848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 
00:24:17.823 [2024-07-15 10:41:05.797941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.797971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.798066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.798104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.798219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.798246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.798355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.798382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.798492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.798518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.798614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.798654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.798738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.798766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.798919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.798958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.799052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.799079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.799195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.799221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 
00:24:17.823 [2024-07-15 10:41:05.799333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.799359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.799440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.799468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.799595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.799634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.799752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.799781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.799916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.799943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.800028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.800056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.800176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.800205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.800325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.800387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.800500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.800560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.800670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.800696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 
00:24:17.823 [2024-07-15 10:41:05.800788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.800827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.800922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.800949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.801067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.801097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.801207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.801233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.801373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.801399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.801515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.801541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.801660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.801700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.801818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.801845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.801942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.801971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.802056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.802083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 
00:24:17.823 [2024-07-15 10:41:05.802199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.802226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.802349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.802376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.802492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.802519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.802630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.802662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.802750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.802776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.802910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.802936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.803018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.803043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.803154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.803179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.803266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.823 [2024-07-15 10:41:05.803293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.823 qpair failed and we were unable to recover it. 00:24:17.823 [2024-07-15 10:41:05.803380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.803418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 
00:24:17.824 [2024-07-15 10:41:05.803556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.803583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.803695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.803732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.803858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.803885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.803971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.803997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.804110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.804139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.804256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.804282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.804373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.804400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.804544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.804570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.804696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.804735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.804853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.804893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 
00:24:17.824 [2024-07-15 10:41:05.805012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.805040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.805170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.805196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.805277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.805304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.805435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.805525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.805661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.805687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.805812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.805842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.805981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.806008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.806126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.806166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.806293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.806321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.806458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.806484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 
00:24:17.824 [2024-07-15 10:41:05.806605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.806633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.806724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.806750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.806883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.806911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.807005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.807031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.807123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.807149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.807296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.807322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.807464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.807492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.807581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.807608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.807745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.807770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.807906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.807933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 
00:24:17.824 [2024-07-15 10:41:05.808027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.808053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.808164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.824 [2024-07-15 10:41:05.808191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.824 qpair failed and we were unable to recover it. 00:24:17.824 [2024-07-15 10:41:05.808327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.808353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.808465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.808497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.808596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.808622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.808773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.808811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.808928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.808954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.809069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.809098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.809175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.809202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.809339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.809365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 
00:24:17.825 [2024-07-15 10:41:05.809446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.809473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.809558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.809586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.809673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.809699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.809808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.809848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.809934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.809961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.810076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.810113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.810199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.810225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.810344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.810371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.810498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.810539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.810633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.810661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 
00:24:17.825 [2024-07-15 10:41:05.810748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.810775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.810888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.810915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.811027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.811054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.811137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.811164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.811275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.811302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.811440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.811466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.811573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.811599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.811688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.811714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.811838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.811865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.811955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.811982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 
00:24:17.825 [2024-07-15 10:41:05.812108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.812139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.812244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.812270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.812385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.812411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.812562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.812589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.812717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.812757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.812900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.812939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.813061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.813101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.813241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.813268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.813354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.813381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.813496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.813523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 
00:24:17.825 [2024-07-15 10:41:05.813624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.813664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.813757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.813787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.825 qpair failed and we were unable to recover it. 00:24:17.825 [2024-07-15 10:41:05.813875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.825 [2024-07-15 10:41:05.813901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.814019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.814047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.814172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.814199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.814315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.814342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.814431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.814458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.814610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.814650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.814739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.814767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.814878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.814907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 
00:24:17.826 [2024-07-15 10:41:05.814985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.815011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.815125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.815165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.815292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.815321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.815404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.815432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.815542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.815569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.815719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.815759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.815861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.815889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.816002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.816030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.816149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.816176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.816311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.816338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 
00:24:17.826 [2024-07-15 10:41:05.816479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.816506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.816583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.816609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.816715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.816755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.816923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.816952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.817041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.817069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.817199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.817234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.817392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.817439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.817526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.817552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.817641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.817669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.817754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.817779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 
00:24:17.826 [2024-07-15 10:41:05.817870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.817903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.817989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.818017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.818126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.818152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.818258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.818284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.818362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.818388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.818473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.818499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.818606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.818632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.818717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.818744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.818826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.818854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.818968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.818994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 
00:24:17.826 [2024-07-15 10:41:05.819074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.819101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.819212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.819240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.819331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.826 [2024-07-15 10:41:05.819358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.826 qpair failed and we were unable to recover it. 00:24:17.826 [2024-07-15 10:41:05.819484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.819513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.819666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.819705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.819833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.819874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.819977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.820004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.820093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.820119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.820235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.820261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.820349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.820376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 
00:24:17.827 [2024-07-15 10:41:05.820469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.820497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.820617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.820643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.820780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.820811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.820918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.820944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.821052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.821078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.821187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.821213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.821330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.821357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.821502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.821542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.821686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.821713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.821806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.821833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 
00:24:17.827 [2024-07-15 10:41:05.821927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.821952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.822033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.822059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.822145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.822171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.822283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.822311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.822408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.822435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.822540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.822567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.822675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.822702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.822783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.822816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.822933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.822959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.823068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.823095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 
00:24:17.827 [2024-07-15 10:41:05.823173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.823199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.823294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.823321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.823402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.823429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.823558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.823598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.823712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.823740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.823824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.823851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.823945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.823972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.824058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.824084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.824196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.824222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.824364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.824391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 
00:24:17.827 [2024-07-15 10:41:05.824481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.824510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.824622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.824650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.824744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.824771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.824869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.827 [2024-07-15 10:41:05.824895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.827 qpair failed and we were unable to recover it. 00:24:17.827 [2024-07-15 10:41:05.824984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.825010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.825103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.825129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.825342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.825396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.825538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.825568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.825712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.825740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.825859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.825888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 
00:24:17.828 [2024-07-15 10:41:05.826003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.826029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.826117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.826143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.826229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.826255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.826371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.826398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.826482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.826508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.826591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.826618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.826723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.826749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.826829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.826864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.826948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.826974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.827072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.827099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 
00:24:17.828 [2024-07-15 10:41:05.827212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.827240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.827352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.827379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.827462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.827488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.827602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.827628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.827745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.827773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.827876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.827903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.828017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.828045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.828151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.828177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.828292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.828318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.828412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.828438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 
00:24:17.828 [2024-07-15 10:41:05.828549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.828574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.828660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.828686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.828796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.828834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.828942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.828968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.829050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.829076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.829202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.829228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.829364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.829389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.829483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.829524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.829615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.829643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.829751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.829777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 
00:24:17.828 [2024-07-15 10:41:05.829928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.829955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.830050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.830076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.828 qpair failed and we were unable to recover it. 00:24:17.828 [2024-07-15 10:41:05.830153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.828 [2024-07-15 10:41:05.830179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.830322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.830349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.830440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.830475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.830598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.830624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.830739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.830765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.830881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.830907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.831017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.831043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.831127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.831153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 
00:24:17.829 [2024-07-15 10:41:05.831238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.831265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.831376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.831416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.831545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.831585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.831682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.831711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.831797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.831829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.831941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.831967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.832045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.832070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.832241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.832267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.832397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.832425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.832520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.832547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 
00:24:17.829 [2024-07-15 10:41:05.832626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.832651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.832759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.832785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.832890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.832920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.833062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.833089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.833227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.833254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.833338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.833365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.833454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.833483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.833614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.833653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.833774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.833809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.833924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.833951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 
00:24:17.829 [2024-07-15 10:41:05.834092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.834118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.834204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.834229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.834312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.834340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.834421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.834447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.834529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.834555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.834641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.834667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.834807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.834849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.834961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.834988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.829 [2024-07-15 10:41:05.835103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.829 [2024-07-15 10:41:05.835130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.829 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.835245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.835272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 
00:24:17.830 [2024-07-15 10:41:05.835398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.835437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.835563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.835591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.835708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.835736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.835880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.835907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.835999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.836025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.836156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.836182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.836341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.836393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.836480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.836506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.836623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.836648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.836762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.836788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 
00:24:17.830 [2024-07-15 10:41:05.836906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.836932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.837023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.837049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.837124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.837149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.837261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.837286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.837425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.837451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.837586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.837612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.837741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.837781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.837886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.837914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.838013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.838054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.838219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.838275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 
00:24:17.830 [2024-07-15 10:41:05.838445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.838512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.838606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.838633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.838733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.838760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.838883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.838909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.838991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.839018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.839157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.839202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.839278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.839304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.839394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.839423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.839545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.839574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.839690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.839717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 
00:24:17.830 [2024-07-15 10:41:05.839829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.839856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.839942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.839974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.840061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.840087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.840194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.840220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.840307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.840334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.830 qpair failed and we were unable to recover it. 00:24:17.830 [2024-07-15 10:41:05.840452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.830 [2024-07-15 10:41:05.840479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.840594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.840621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.840710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.840738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.840849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.840889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.841013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.841040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 
00:24:17.831 [2024-07-15 10:41:05.841159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.841186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.841277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.841303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.841443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.841471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.841623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.841662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.841776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.841810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.841945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.841972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.842055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.842081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.842196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.842222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.842335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.842363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.842512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.842540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 
00:24:17.831 [2024-07-15 10:41:05.842633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.842663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.842755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.842782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.842880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.842906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.843016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.843041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.843131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.843157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.843305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.843331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.843421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.843449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.843533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.843559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.843678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.843713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.843833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.843860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 
00:24:17.831 [2024-07-15 10:41:05.843942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.843968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.844105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.844130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.844208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.844233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.844336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.844362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.844481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.844507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.844616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.844644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.844740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.844766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.844898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.844927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.845015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.845043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.845255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.845284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 
00:24:17.831 [2024-07-15 10:41:05.845464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.845516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.845655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.845681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.845763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.845789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.845948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.845988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.846080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.846107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.846190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.831 [2024-07-15 10:41:05.846216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.831 qpair failed and we were unable to recover it. 00:24:17.831 [2024-07-15 10:41:05.846312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.846338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.846414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.846440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.846577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.846603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.846710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.846736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 
00:24:17.832 [2024-07-15 10:41:05.846828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.846856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.846962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.847001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.847127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.847167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.847289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.847316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.847459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.847485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.847617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.847646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.847739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.847768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.847917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.847944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.848033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.848059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.848170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.848196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 
00:24:17.832 [2024-07-15 10:41:05.848291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.848317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.848404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.848430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.848539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.848567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.848668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.848707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.848855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.848884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.849006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.849033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.849152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.849180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.849273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.849299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.849410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.849436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.849558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.849583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 
00:24:17.832 [2024-07-15 10:41:05.849695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.849721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.849835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.849861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.849955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.849983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.850095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.850121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.850202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.850227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.850315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.850342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.850456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.850482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.850593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.850620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.850706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.850733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.850850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.850878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 
00:24:17.832 [2024-07-15 10:41:05.850997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.851025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.851132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.851169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.851262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.851288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.851368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.851394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.851511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.851537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.851643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.851669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.851752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.832 [2024-07-15 10:41:05.851780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.832 qpair failed and we were unable to recover it. 00:24:17.832 [2024-07-15 10:41:05.851908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.851937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.852056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.852083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.852193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.852218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 
00:24:17.833 [2024-07-15 10:41:05.852353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.852379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.852492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.852518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.852616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.852642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.852751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.852776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.852868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.852897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.852983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.853014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.853124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.853151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.853261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.853287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.853400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.853427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.853564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.853590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 
00:24:17.833 [2024-07-15 10:41:05.853691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.853731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.853822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.853850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.853947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.853975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.854092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.854118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.854203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.854230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.854339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.854365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.854516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.854571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.854675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.854700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.854843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.854883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.855012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.855041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 
00:24:17.833 [2024-07-15 10:41:05.855154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.855180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.855299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.855326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.855406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.855432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.855549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.855575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.855720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.855746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.855859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.855885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.856015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.856055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.856139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.856167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.856330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.856386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.833 [2024-07-15 10:41:05.856471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.856498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 
00:24:17.833 [2024-07-15 10:41:05.856586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.833 [2024-07-15 10:41:05.856612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.833 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.856707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.856747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.856872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.856901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.857021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.857047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.857134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.857162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.857298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.857364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.857515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.857556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.857655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.857682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.857781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.857827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.857930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.857958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 
00:24:17.834 [2024-07-15 10:41:05.858083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.858108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.858201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.858226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.858346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.858375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.858503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.858533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.858663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.858703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.858787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.858820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.858923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.858950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.859028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.859055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.859135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.859161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.859246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.859273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 
00:24:17.834 [2024-07-15 10:41:05.859392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.859421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.859544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.859573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.859692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.859718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.859805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.859832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.859945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.859971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.860060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.860085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.860192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.860218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.860333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.860359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.860504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.860530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.860634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.860662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 
00:24:17.834 [2024-07-15 10:41:05.860756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.860785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.860890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.860917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.861030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.861057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.861211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.861266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.861423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.861474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.861614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.861642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.861717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.861743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.861842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.861869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.861982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.862008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.862175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.862232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 
00:24:17.834 [2024-07-15 10:41:05.862383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.862439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.834 qpair failed and we were unable to recover it. 00:24:17.834 [2024-07-15 10:41:05.862575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.834 [2024-07-15 10:41:05.862601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.862714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.862744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.862870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.862910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.863014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.863041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.863158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.863184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.863334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.863391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.863547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.863599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.863709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.863736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.863831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.863859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 
00:24:17.835 [2024-07-15 10:41:05.863954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.863980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.864084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.864110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.864250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.864276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.864356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.864383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.864506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.864546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.864660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.864687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.864811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.864838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.864957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.864984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.865096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.865123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.865235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.865262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 
00:24:17.835 [2024-07-15 10:41:05.865397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.865449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.865588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.865640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.865758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.865789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.865916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.865943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.866028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.866054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.866166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.866192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.866279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.866306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.866423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.866449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.866529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.866557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.866663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.866694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 
00:24:17.835 [2024-07-15 10:41:05.866781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.866823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.866904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.866931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.867044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.867070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.867193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.867220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.867301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.867327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.867438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.867464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.867598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.867623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.867707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.867735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.867865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.867906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.868034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.868074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 
00:24:17.835 [2024-07-15 10:41:05.868267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.868294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.868377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.868403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.835 qpair failed and we were unable to recover it. 00:24:17.835 [2024-07-15 10:41:05.868496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.835 [2024-07-15 10:41:05.868522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.868610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.868637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.868752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.868779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.868876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.868906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.869023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.869050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.869164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.869191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.869368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.869422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.869610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.869637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 
00:24:17.836 [2024-07-15 10:41:05.869754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.869781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.869887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.869914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.870027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.870053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.870167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.870193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.870275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.870301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.870379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.870406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.870517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.870544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.870631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.870659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.870749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.870776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.870878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.870906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 
00:24:17.836 [2024-07-15 10:41:05.870985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.871012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.871119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.871146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.871256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.871282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.871366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.871394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.871522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.871561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.871690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.871718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.871835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.871862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.871945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.871971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.872084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.872110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.872184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.872215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 
00:24:17.836 [2024-07-15 10:41:05.872328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.872355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.872469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.872495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.872603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.872629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.872744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.872772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.872876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.872903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.872988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.873015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.873126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.873153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.873244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.873271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.873386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.873414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.873528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.873555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 
00:24:17.836 [2024-07-15 10:41:05.873714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.873753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.873875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.873903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.874015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.836 [2024-07-15 10:41:05.874041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.836 qpair failed and we were unable to recover it. 00:24:17.836 [2024-07-15 10:41:05.874136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.874162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.874297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.874344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.874485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.874539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.874621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.874649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.874764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.874792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.874912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.874939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.875053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.875079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 
00:24:17.837 [2024-07-15 10:41:05.875239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.875292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.875464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.875513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.875632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.875658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.875753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.875779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.875908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.875947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.876069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.876097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.876198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.876238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.876408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.876471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.876591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.876617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.876706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.876732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 
00:24:17.837 [2024-07-15 10:41:05.876850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.876877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.876998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.877027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.877134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.877161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.877250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.877277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.877382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.877408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.877494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.877521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.877662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.877688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.877797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.877829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.877938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.877964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.878049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.878075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 
00:24:17.837 [2024-07-15 10:41:05.878167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.878193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.878283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.878310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.878419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.878446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.878560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.878587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.878663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.878689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.878810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.878841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.878925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.878952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.879068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.837 [2024-07-15 10:41:05.879116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.837 qpair failed and we were unable to recover it. 00:24:17.837 [2024-07-15 10:41:05.879206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.879233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.879351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.879379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 
00:24:17.838 [2024-07-15 10:41:05.879489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.879517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.879608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.879635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.879717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.879743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.879843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.879869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.879974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.880000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.880107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.880133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.880246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.880271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.880382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.880407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.880492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.880517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.880644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.880683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 
00:24:17.838 [2024-07-15 10:41:05.880799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.880831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.880916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.880943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.881037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.881063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.881154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.881181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.881264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.881290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.881399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.881425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.881576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.881621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.881734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.881763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.881889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.881917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.882003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.882029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 
00:24:17.838 [2024-07-15 10:41:05.882144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.882171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.882252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.882278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.882467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.882528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.882631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.882657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.882734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.882762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.882869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.882898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.882989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.883017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.883156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.883200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.883335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.883384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.883499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.883529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 
00:24:17.838 [2024-07-15 10:41:05.883652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.883680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.883796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.883830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.883913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.883939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.884022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.884048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.884183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.884209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.884397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.884432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.884569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.884596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.884707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.884735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.838 [2024-07-15 10:41:05.884827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.838 [2024-07-15 10:41:05.884855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.838 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.884965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.884992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 
00:24:17.839 [2024-07-15 10:41:05.885095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.885122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.885202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.885228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.885307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.885333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.885452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.885479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.885618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.885644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.885729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.885754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.885843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.885870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.886011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.886036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.886149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.886175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.886322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.886349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 
00:24:17.839 [2024-07-15 10:41:05.886469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.886498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.886610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.886636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.886716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.886742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.886876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.886903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.887000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.887025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.887163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.887189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.887275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.887305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.887435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.887462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.887549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.887577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.887696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.887721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 
00:24:17.839 [2024-07-15 10:41:05.887808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.887835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.887924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.887950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.888089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.888116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.888230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.888256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.888336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.888361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.888487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.888527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.888619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.888647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.888749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.888777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.888899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.888926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.889008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.889034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 
00:24:17.839 [2024-07-15 10:41:05.889148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.889175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.889286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.889312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.889397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.889424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.889530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.889569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.889661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.889689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.889822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.889862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.889984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.890011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.890098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.890123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.890235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.839 [2024-07-15 10:41:05.890261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.839 qpair failed and we were unable to recover it. 00:24:17.839 [2024-07-15 10:41:05.890370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.890398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 
00:24:17.840 [2024-07-15 10:41:05.890477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.890502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.890586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.890612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.890716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.890741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.890824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.890856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.890994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.891020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.891132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.891159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.891242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.891268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.891411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.891439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.891566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.891594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.891716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.891747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 
00:24:17.840 [2024-07-15 10:41:05.891888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.891916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.892030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.892056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.892146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.892171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.892250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.892277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.892416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.892443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.892572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.892611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.892702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.892730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.892852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.892880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.892960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.892987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.893124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.893150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 
00:24:17.840 [2024-07-15 10:41:05.893233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.893261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.893488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.893538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.893626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.893652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.893761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.893788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.893911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.893937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.894085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.894112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.894225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.894251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.894363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.894390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.894478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.894504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.894586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.894614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 
00:24:17.840 [2024-07-15 10:41:05.894702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.894730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.894835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.894875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.894964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.894991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.895125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.895170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.895276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.895324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.895455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.895506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.895614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.895640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.895755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.895782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.895884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.895910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.840 qpair failed and we were unable to recover it. 00:24:17.840 [2024-07-15 10:41:05.896016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.840 [2024-07-15 10:41:05.896042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 
00:24:17.841 [2024-07-15 10:41:05.896118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.896144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.896285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.896313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.896397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.896424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.896512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.896545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.896659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.896686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.896774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.896809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.896925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.896951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.897041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.897068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.897199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.897254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.897395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.897442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 
00:24:17.841 [2024-07-15 10:41:05.897575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.897622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.897728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.897755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.897874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.897901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.898015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.898041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.898126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.898153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.898270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.898297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.898382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.898408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.898491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.898518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.898648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.898688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.898791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.898845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 
00:24:17.841 [2024-07-15 10:41:05.898946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.898974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.899072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.899098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.899209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.899236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.899375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.899401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.899511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.899538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.899644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.899669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.899792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.899843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.899968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.899996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.900083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.900109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.900216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.900242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 
00:24:17.841 [2024-07-15 10:41:05.900323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.900354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.900500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.900545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.900663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.900688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.900829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.900857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.900973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.901000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.901086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.901113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.901225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.901251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.901366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.901394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.901503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.901529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 00:24:17.841 [2024-07-15 10:41:05.901637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.841 [2024-07-15 10:41:05.901664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.841 qpair failed and we were unable to recover it. 
00:24:17.841 [2024-07-15 10:41:05.901742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.901769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.901862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.901888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.901997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.902024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.902109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.902136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.902249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.902275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.902396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.902422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.902530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.902556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.902665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.902691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.902772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.902798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.902902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.902928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 
00:24:17.842 [2024-07-15 10:41:05.903024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.903050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.903158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.903184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.903295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.903320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.903396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.903421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.903562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.903590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.903675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.903702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.903780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.903820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.903935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.903962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.904112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.904150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.904244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.904271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 
00:24:17.842 [2024-07-15 10:41:05.904406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.904457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.904571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.904598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.904680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.904707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.904853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.904879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.904963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.904990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.905116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.905156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.905276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.905303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.905392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.905419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.905499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.905526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.905638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.905664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 
00:24:17.842 [2024-07-15 10:41:05.905775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.905811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.905895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.905922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.906033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.906060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.906150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.906176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.842 [2024-07-15 10:41:05.906318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-07-15 10:41:05.906345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.842 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.906459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.906485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.906603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.906628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.906711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.906737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.906829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.906855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.906961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.906987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 
00:24:17.843 [2024-07-15 10:41:05.907119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.907146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.907252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.907278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.907361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.907387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.907501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.907528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.907639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.907678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.907761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.907789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.907889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.907916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.908026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.908051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.908232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.908267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.908446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.908500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 
00:24:17.843 [2024-07-15 10:41:05.908590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.908615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.908737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.908777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.908906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.908935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.909047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.909073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.909205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.909271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.909357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.909384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.909459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.909486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.909578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.909606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.909740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.909766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.909871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.909897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 
00:24:17.843 [2024-07-15 10:41:05.909974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.910000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.910111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.910137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.910242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.910267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.910404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.910453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.910602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.910662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.910810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.910838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.910924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.910951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.911066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.911092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.911202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.911229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.911367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.911393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 
00:24:17.843 [2024-07-15 10:41:05.911484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.911516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.911634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.911664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.911749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.911776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.911873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.911898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.911978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.843 [2024-07-15 10:41:05.912004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.843 qpair failed and we were unable to recover it. 00:24:17.843 [2024-07-15 10:41:05.912132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.912180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.912283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.912316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.912444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.912472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.912580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.912606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.912714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.912740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 
00:24:17.844 [2024-07-15 10:41:05.912851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.912878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.913018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.913044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.913159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.913185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.913268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.913293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.913389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.913414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.913542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.913581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.913699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.913726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.913852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.913891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.914035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.914062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.914200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.914248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 
00:24:17.844 [2024-07-15 10:41:05.914403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.914453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.914559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.914586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.914689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.914729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.914833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.914872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.914967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.914995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.915088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.915114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.915224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.915251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.915368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.915397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.915490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.915518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.915641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.915670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 
00:24:17.844 [2024-07-15 10:41:05.915776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.915822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.915921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.915950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.916042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.916069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.916184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.916211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.916355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.916382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.916496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.916522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.916597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.916623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.916748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.916787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.916882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.916911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.917050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.917076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 
00:24:17.844 [2024-07-15 10:41:05.917160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.917187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.917275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.917302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.917416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.917442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.917580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.917605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.917720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.844 [2024-07-15 10:41:05.917745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.844 qpair failed and we were unable to recover it. 00:24:17.844 [2024-07-15 10:41:05.917864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.917894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.918006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.918034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.918151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.918178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.918258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.918295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.918402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.918428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 
00:24:17.845 [2024-07-15 10:41:05.918534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.918560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.918646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.918673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.918805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.918845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.918970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.919009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.919106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.919134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.919242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.919269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.919403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.919429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.919519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.919546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.919669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.919695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.919810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.919839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 
00:24:17.845 [2024-07-15 10:41:05.919968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.919996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.920088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.920116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.920207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.920233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.920312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.920339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.920506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.920572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.920684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.920711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.920799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.920831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.920922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.920955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.921034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.921060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.921173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.921200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 
00:24:17.845 [2024-07-15 10:41:05.921313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.921341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.921449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.921475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.921559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.921585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.921700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.921726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.921839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.921866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.921984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.922013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.922104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.922132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.922213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.922240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.922355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.922383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.922519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.922545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 
00:24:17.845 [2024-07-15 10:41:05.922660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.922687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.922777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.922815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.922928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.922954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.923058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.923085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.923199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.923225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.923343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.845 [2024-07-15 10:41:05.923369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.845 qpair failed and we were unable to recover it. 00:24:17.845 [2024-07-15 10:41:05.923452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.923478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.923576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.923604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.923731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.923771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.923869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.923897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 
00:24:17.846 [2024-07-15 10:41:05.923983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.924010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.924129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.924155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.924244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.924270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.924366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.924394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.924480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.924509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.924647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.924673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.924793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.924827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.924926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.924952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.925063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.925089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.925199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.925225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 
00:24:17.846 [2024-07-15 10:41:05.925342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.925369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.925452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.925479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.925593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.925620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.925768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.925794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.925881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.925908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.926043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.926069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.926185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.926211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.926327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.926357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.926476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.926504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.926594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.926620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 
00:24:17.846 [2024-07-15 10:41:05.926744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.926783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.926949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.926976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.927119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.927145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.927245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.927278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.927431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.927478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.927558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.927584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.927722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.927749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.927836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.927862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.927946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.927973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.846 [2024-07-15 10:41:05.928054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.928080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 
00:24:17.846 [2024-07-15 10:41:05.928186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.846 [2024-07-15 10:41:05.928212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.846 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.928355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.928381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.928491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.928518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.928598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.928625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.928741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.928768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.928851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.928877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.928981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.929007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.929121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.929147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.929258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.929284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.929400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.929428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 
00:24:17.847 [2024-07-15 10:41:05.929542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.929569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.929689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.929716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.929829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.929856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.929941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.929967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.930046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.930076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.930186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.930211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.930297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.930323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.930432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.930459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.930537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.930564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.930674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.930700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 
00:24:17.847 [2024-07-15 10:41:05.930797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.930830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.930962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.930989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.931079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.931108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.931190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.931216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.931385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.931433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.931546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.931572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.931678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.931705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.931818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.931846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.931944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.931970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.932110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.932135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 
00:24:17.847 [2024-07-15 10:41:05.932225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.932252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.932338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.932365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.932480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.932508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.932624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.932651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.932761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.932787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.932903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.932929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.933041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.933066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.933176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.933201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.933316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.933342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.933447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.933473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 
00:24:17.847 [2024-07-15 10:41:05.933580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.933606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.847 qpair failed and we were unable to recover it. 00:24:17.847 [2024-07-15 10:41:05.933690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.847 [2024-07-15 10:41:05.933718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.933830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.933856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.933970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.933996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.934084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.934110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.934197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.934223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.934348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.934374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.934482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.934507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.934626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.934651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.934734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.934760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 
00:24:17.848 [2024-07-15 10:41:05.934860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.934889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.934974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.935000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.935114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.935141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.935254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.935280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.935387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.935418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.935502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.935529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.935645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.935672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.935775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.935826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.935973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.936000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.936120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.936148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 
00:24:17.848 [2024-07-15 10:41:05.936235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.936261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.936366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.936400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.936558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.936583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.936696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.936723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.936819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.936846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.936935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.936961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.937044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.937071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.937179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.937205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.937323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.937349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.937431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.937459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 
00:24:17.848 [2024-07-15 10:41:05.937590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.937631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.937733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.937761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.937859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.937886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.937981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.938008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.938089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.938116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.938230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.938257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.938371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.938397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.938533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.938559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.938671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.938698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.938857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.938883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 
00:24:17.848 [2024-07-15 10:41:05.938995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.939020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.939105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.848 [2024-07-15 10:41:05.939133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.848 qpair failed and we were unable to recover it. 00:24:17.848 [2024-07-15 10:41:05.939222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.939250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.939391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.939417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.939556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.939583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.939670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.939699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.939818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.939846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.939929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.939957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.940071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.940158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.940240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.940266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 
00:24:17.849 [2024-07-15 10:41:05.940376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.940402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.940511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.940537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.940679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.940705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.940789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.940823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.940959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.940989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.941072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.941098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.941179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.941205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.941289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.941315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.941436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.941462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.941575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.941603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 
00:24:17.849 [2024-07-15 10:41:05.941688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.941716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.941828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.941856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.941951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.941978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.942085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.942111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.942249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.942276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.942371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.942398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.942482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.942510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.942632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.942671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.942799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.942831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.942974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.943001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 
00:24:17.849 [2024-07-15 10:41:05.943129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.943177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.943258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.943284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.943372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.943400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.943544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.943570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.943667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.943707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.943853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.943881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.943971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.943997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.944075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.944101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.944236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.944262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.944352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.944377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 
00:24:17.849 [2024-07-15 10:41:05.944466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.944493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.944602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.944632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.944776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.849 [2024-07-15 10:41:05.944812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.849 qpair failed and we were unable to recover it. 00:24:17.849 [2024-07-15 10:41:05.944901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.944927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.945009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.945036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.945144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.945170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.945284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.945311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.945449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.945474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.945617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.945645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.945724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.945752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 
00:24:17.850 [2024-07-15 10:41:05.945848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.945877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.945974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.946011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.946093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.946120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.946258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.946306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.946446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.946493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.946610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.946637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.946774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.946805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.946922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.946949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.947061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.947087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.947196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.947222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 
00:24:17.850 [2024-07-15 10:41:05.947363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.947388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.947499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.947527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.947667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.947706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.947833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.947861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.948001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.948028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.948119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.948145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.948222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.948247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.948323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.948348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.948465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.948492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.948574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.948599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 
00:24:17.850 [2024-07-15 10:41:05.948683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.948710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.948828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.948854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.948936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.948963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.949096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.949122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.949214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.949240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.949322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.949348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.949432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.949460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.949601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.949627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.850 qpair failed and we were unable to recover it. 00:24:17.850 [2024-07-15 10:41:05.949735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.850 [2024-07-15 10:41:05.949761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.949855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.949882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 
00:24:17.851 [2024-07-15 10:41:05.949991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.950017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.950130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.950163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.950242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.950268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.950340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.950366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.950449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.950475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.950588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.950615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.950738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.950777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.950929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.950957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.951055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.951082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.951169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.951195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 
00:24:17.851 [2024-07-15 10:41:05.951331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.951357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.951468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.951494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.951602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.951629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.951740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.951766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.951902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.951930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.952049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.952076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.952187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.952214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.952296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.952323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.952443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.952471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.952562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.952590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 
00:24:17.851 [2024-07-15 10:41:05.952725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.952751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.952889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.952917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.953000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.953026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.953138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.953164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.953269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.953296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.953408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.953435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.953516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.953543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.953680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.953707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.953837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.953876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.953980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.954019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 
00:24:17.851 [2024-07-15 10:41:05.954140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.954168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.954288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.954315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.954420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.954446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.954564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.954593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.954712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.954739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.954895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.954935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.955059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.955087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.955172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.955199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.955372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.851 [2024-07-15 10:41:05.955421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.851 qpair failed and we were unable to recover it. 00:24:17.851 [2024-07-15 10:41:05.955638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.955693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 
00:24:17.852 [2024-07-15 10:41:05.955785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.955818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.955929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.955960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.956089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.956158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.956291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.956317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.956430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.956457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.956575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.956603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.956713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.956739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.956821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.956847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.956960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.956986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.957102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.957128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 
00:24:17.852 [2024-07-15 10:41:05.957265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.957291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.957407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.957434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.957540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.957566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.957681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.957707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.957798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.957833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.957966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.957992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.958125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.958165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.958263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.958291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.958379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.958404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.958496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.958522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 
00:24:17.852 [2024-07-15 10:41:05.958628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.958655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.958742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.958768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.958870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.958897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.958992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.959018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.959139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.959165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.959313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.959341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.959494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.959533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.959632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.959660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.959770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.959813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.959955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.959982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 
00:24:17.852 [2024-07-15 10:41:05.960119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.960145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.960230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.960257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.960349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.960377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.960499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.960525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.960616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.960642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.960750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.960777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.960870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.960897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.961008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.961034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.961143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.961170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 00:24:17.852 [2024-07-15 10:41:05.961261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.852 [2024-07-15 10:41:05.961287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.852 qpair failed and we were unable to recover it. 
00:24:17.852 [2024-07-15 10:41:05.961395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.961421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.961533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.961561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.961695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.961735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.961864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.961892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.962011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.962038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.962147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.962173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.962285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.962312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.962450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.962478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.962570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.962610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.962709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.962750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 
00:24:17.853 [2024-07-15 10:41:05.962864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.962893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.963012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.963038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.963122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.963149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.963242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.963269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.963389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.963417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.963545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.963574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.963689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.963715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.963828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.963856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.963971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.963998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.964088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.964113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 
00:24:17.853 [2024-07-15 10:41:05.964218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.964243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.964381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.964407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.964516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.964541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.964645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.964670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.964779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.964811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.964925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.964952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.965081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.965121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.965243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.965270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.965414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.965441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.965561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.965588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 
00:24:17.853 [2024-07-15 10:41:05.965677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.965717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.965819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.965849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.965964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.965991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.966102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.966127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.966231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.966256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.966346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.966372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.966456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.966481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.966571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.966597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.966708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.966734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.966853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.966880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 
00:24:17.853 [2024-07-15 10:41:05.966993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.967018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.853 qpair failed and we were unable to recover it. 00:24:17.853 [2024-07-15 10:41:05.967125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.853 [2024-07-15 10:41:05.967151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.967234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.967261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.967348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.967374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.967457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.967482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.967575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.967600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.967677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.967703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.967778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.967813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.967928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.967953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.968037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.968062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 
00:24:17.854 [2024-07-15 10:41:05.968144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.968170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.968280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.968306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.968393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.968418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.968524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.968549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.968673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.968699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.968775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.968806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.968931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.968957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.969094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.969119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.969224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.969249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.969340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.969366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 
00:24:17.854 [2024-07-15 10:41:05.969461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.969501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.969649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.969676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.969760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.969788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.969880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.969907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.970018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.970056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.970149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.970177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.970349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.970403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.970488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.970514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.970628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.970654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.970746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.970775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 
00:24:17.854 [2024-07-15 10:41:05.970863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.970890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.971004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.971030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.971108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.971134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.971210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.971235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.971341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.971366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.971481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.971507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.971644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.971669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.971777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.854 [2024-07-15 10:41:05.971811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.854 qpair failed and we were unable to recover it. 00:24:17.854 [2024-07-15 10:41:05.971921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.971947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.972023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.972049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 
00:24:17.855 [2024-07-15 10:41:05.972139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.972165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.972253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.972279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.972384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.972410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.972493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.972519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.972608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.972633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.972709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.972735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.972848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.972874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.973012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.973037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.973146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.973172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.973263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.973289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 
00:24:17.855 [2024-07-15 10:41:05.973373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.973399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.973547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.973586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.973708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.973735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.973825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.973853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.973974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.974001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.974112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.974138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.974220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.974246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.974364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.974391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.974501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.974527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.974629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.974654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 
00:24:17.855 [2024-07-15 10:41:05.974771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.974796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.974920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.974946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.975030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.975055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.975168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.975193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.975271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.975297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.975403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.975429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.975516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.975542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.975660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.975700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.975825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.975853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.975940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.975967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 
00:24:17.855 [2024-07-15 10:41:05.976087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.976115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.976194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.976220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.976355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.976382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.976490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.976517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.976621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.976660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.976798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.976843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.976972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.977000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.855 [2024-07-15 10:41:05.977168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.855 [2024-07-15 10:41:05.977233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.855 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.977379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.977445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.977635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.977662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 
00:24:17.856 [2024-07-15 10:41:05.977750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.977776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.977894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.977920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.978057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.978083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.978166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.978192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.978307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.978332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.978443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.978469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.978555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.978580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.978684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.978724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.978823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.978850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.978967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.978997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 
00:24:17.856 [2024-07-15 10:41:05.979117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.979144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.979264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.979291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.979437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.979494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.979604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.979630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.979713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.979740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.979878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.979905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.980014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.980046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.980160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.980187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.980299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.980326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.980411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.980438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 
00:24:17.856 [2024-07-15 10:41:05.980578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.980605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.980744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.980770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.980927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.980953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.981072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.981100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.981180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.981207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.981326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.981353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.981493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.981518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.981647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.981673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.981786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.981822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.981939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.981965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 
00:24:17.856 [2024-07-15 10:41:05.982064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.982104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.982225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.982252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.982407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.982458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.982597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.982623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.982732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.982759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.982861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.982901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.982991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.983018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.983097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.983124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.856 [2024-07-15 10:41:05.983344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.856 [2024-07-15 10:41:05.983397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.856 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.983504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.983570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 
00:24:17.857 [2024-07-15 10:41:05.983707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.983732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.983822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.983849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.983931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.983957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.984073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.984102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.984186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.984213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.984307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.984334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.984550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.984604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.984718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.984744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.984873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.984913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.985037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.985064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 
00:24:17.857 [2024-07-15 10:41:05.985141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.985167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.985277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.985302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.985442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.985468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.985559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.985585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.985695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.985720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.985865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.985892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.985976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.986001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.986091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.986117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.986235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.986261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.986350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.986390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 
00:24:17.857 [2024-07-15 10:41:05.986481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.986508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.986621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.986648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.986763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.986790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.986888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.986916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.987031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.987059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.987176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.987203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.987319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.987345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.987434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.987460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.987563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.987589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.987709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.987738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 
00:24:17.857 [2024-07-15 10:41:05.987833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.987861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.987951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.987978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.988090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.988117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.988231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.988258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.988375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.988402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.988487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.988513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.988589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.988616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.988722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.988749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.988860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.988887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 00:24:17.857 [2024-07-15 10:41:05.988995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.857 [2024-07-15 10:41:05.989022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.857 qpair failed and we were unable to recover it. 
00:24:17.858 [2024-07-15 10:41:05.989142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.989170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.989262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.989289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.989405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.989445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.989592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.989625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.989707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.989734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.989831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.989858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.989972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.989998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.990117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.990143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.990254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.990280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.990395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.990423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 
00:24:17.858 [2024-07-15 10:41:05.990507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.990533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.990636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.990662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.990807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.990834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.990914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.990941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.991021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.991047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.991157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.991184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.991260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.991287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.991404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.991432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.991535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.991574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.991691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.991719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 
00:24:17.858 [2024-07-15 10:41:05.991837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.991864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.991960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.991986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.992069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.992095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.992204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.992230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.992325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.992353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.992456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.992495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.992611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.992638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.992749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.992777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.992897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.992924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.993038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.993064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 
00:24:17.858 [2024-07-15 10:41:05.993182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.993210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.993324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.993350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.993533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.993558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.993661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.993688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.993831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.993858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.993994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.994020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.994135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.994161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.994278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.994303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.994416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.994444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 00:24:17.858 [2024-07-15 10:41:05.994572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.994600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.858 qpair failed and we were unable to recover it. 
00:24:17.858 [2024-07-15 10:41:05.994687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.858 [2024-07-15 10:41:05.994714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.994824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.994851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.994992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.995019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.995097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.995127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.995235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.995262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.995378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.995404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.995482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.995509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.995620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.995647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.995727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.995753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.995893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.995933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 
00:24:17.859 [2024-07-15 10:41:05.996028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.996055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.996172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.996198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.996305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.996332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.996435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.996461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.996574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.996600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.996717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.996744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.996857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.996896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.997000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.997029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.997116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.997142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.997255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.997281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 
00:24:17.859 [2024-07-15 10:41:05.997394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.997420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.997506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.997534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.997641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.997667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.997759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.997786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.997881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.997909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.998027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.998054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.998141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.998168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.998278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.998305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.998413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.998439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.998527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.998554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 
00:24:17.859 [2024-07-15 10:41:05.998669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.998698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.998793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.998845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.998972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.998999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.999084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.999110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.999204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.999231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.859 qpair failed and we were unable to recover it. 00:24:17.859 [2024-07-15 10:41:05.999337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.859 [2024-07-15 10:41:05.999363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:05.999476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:05.999502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:05.999590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:05.999616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:05.999726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:05.999752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:05.999842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:05.999869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 
00:24:17.860 [2024-07-15 10:41:05.999985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.000011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.000123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.000149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.000230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.000256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.000345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.000370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.000488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.000514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.000590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.000616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.000721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.000747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.000860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.000887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.000972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.000998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.001096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.001122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 
00:24:17.860 [2024-07-15 10:41:06.001233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.001259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.001345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.001371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.001486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.001511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.001591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.001617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.001754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.001779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.001864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.001890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.002006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.002032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.002116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.002146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.002232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.002258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.002359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.002385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 
00:24:17.860 [2024-07-15 10:41:06.002521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.002547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.002639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.002665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.002752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.002778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.002873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.002899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.002973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.002999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.003088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.003114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.003257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.003283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.003375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.003401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.003518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.003544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.003673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.003699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 
00:24:17.860 [2024-07-15 10:41:06.003785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.003818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.003911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.003937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.004048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.004074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.004211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.004237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.004354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.004379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.004524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.004550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.860 [2024-07-15 10:41:06.004663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.860 [2024-07-15 10:41:06.004689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.860 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.004816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.004843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.004930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.004957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.005043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.005069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 
00:24:17.861 [2024-07-15 10:41:06.005184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.005209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.005286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.005311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.005448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.005474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.005565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.005592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.005700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.005730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.005849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.005876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.005982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.006008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.006110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.006151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.006272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.006299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.006395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.006421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 
00:24:17.861 [2024-07-15 10:41:06.006518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.006558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.006675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.006703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.006811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.006851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.006974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.007002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.007120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.007146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.007306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.007359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.007577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.007631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.007721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.007747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.007838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.007864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.007974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.008000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 
00:24:17.861 [2024-07-15 10:41:06.008094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.008120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.008233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.008259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.008361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.008386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.008500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.008525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.008623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.008650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.008771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.008796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.008914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.008938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.009054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.009078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.009190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.009214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.009327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.009351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 
00:24:17.861 [2024-07-15 10:41:06.009437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.009461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.009589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.009626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.009751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.009778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.009879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.009903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.010016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.010042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.010191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.010216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.010333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.010358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.861 qpair failed and we were unable to recover it. 00:24:17.861 [2024-07-15 10:41:06.010467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.861 [2024-07-15 10:41:06.010492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.010597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.010632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.010752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.010779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 
00:24:17.862 [2024-07-15 10:41:06.010904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.010930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.011026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.011053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.011169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.011201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.011287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.011314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.011428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.011459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.011557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.011582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.011682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.011722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.011881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.011922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.012043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.012077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.012191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.012218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 
00:24:17.862 [2024-07-15 10:41:06.012333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.012360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.012479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.012505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.012593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.012621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.012737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.012763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.012892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.012921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.013064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.013092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.013183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.013215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.013325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.013352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.013447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.013475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.013566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.013592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 
00:24:17.862 [2024-07-15 10:41:06.013677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.013704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.013822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.013849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.013968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.013995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.014085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.014115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.014254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.014281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.014429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.014455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.014550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.014578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.014686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.014713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.014827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.014861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.014973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.015000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 
00:24:17.862 [2024-07-15 10:41:06.015139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.015165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.015283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.015312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.015459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.015485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.015634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.015663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.015805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.015834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.015926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.015961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.016085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.016112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.016204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.862 [2024-07-15 10:41:06.016231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.862 qpair failed and we were unable to recover it. 00:24:17.862 [2024-07-15 10:41:06.016348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.016374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.016494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.016523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 
00:24:17.863 [2024-07-15 10:41:06.016609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.016635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.016772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.016806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.016921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.016951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.017036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.017063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.017216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.017259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.017366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.017394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.017478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.017503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.017612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.017638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.017724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.017751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.017866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.017892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 
00:24:17.863 [2024-07-15 10:41:06.017973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.017999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.018087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.018114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.018251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.018277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.018394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.018420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.018514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.018554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.018666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.018694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.018783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.018816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.018901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.018939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.019085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.019112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.019193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.019219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 
00:24:17.863 [2024-07-15 10:41:06.019338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.019365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.019470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.019502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.019613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.019640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.019730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.019758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.019879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.019907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.019996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.020023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.020144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.020170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.020286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.020312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.020400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.020427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.020518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.020545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 
00:24:17.863 [2024-07-15 10:41:06.020670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.020698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.020815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.020846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.020956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.020982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.863 qpair failed and we were unable to recover it. 00:24:17.863 [2024-07-15 10:41:06.021109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.863 [2024-07-15 10:41:06.021134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.021252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.021278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.021368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.021394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.021478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.021510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.021596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.021622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.021735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.021760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.021843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.021869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 
00:24:17.864 [2024-07-15 10:41:06.021976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.022002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.022081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.022111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.022222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.022247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.022395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.022421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.022527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.022553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.022652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.022678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.022770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.022799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.022900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.022927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.023065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.023092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.023189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.023216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 
00:24:17.864 [2024-07-15 10:41:06.023304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.023330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.023437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.023464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.023554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.023580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.023667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.023693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.023767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.023794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.023879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.023905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.024042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.024069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.024177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.024203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.024291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.024323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.024465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.024492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 
00:24:17.864 [2024-07-15 10:41:06.024605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.024632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.024715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.864 [2024-07-15 10:41:06.024741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.864 qpair failed and we were unable to recover it. 00:24:17.864 [2024-07-15 10:41:06.024858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.024886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.024990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.025040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.025163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.025190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.025282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.025310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.025455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.025482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.025574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.025602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.025710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.025736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.025832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.025859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 
00:24:17.865 [2024-07-15 10:41:06.025948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.025976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.026060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.026097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.026253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.026279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.026383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.026409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.026502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.026528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.026607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.026633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.026720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.026746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.026880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.026907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.027018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.027044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.027151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.027177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 
00:24:17.865 [2024-07-15 10:41:06.027281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.027307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.027390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.027416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.027525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.027553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.027642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.027669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.027758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.027784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.027894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.027921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.028033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.028059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.028175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.028202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.028322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.028359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 00:24:17.865 [2024-07-15 10:41:06.028470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.865 [2024-07-15 10:41:06.028496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.865 qpair failed and we were unable to recover it. 
00:24:17.865 [2024-07-15 10:41:06.028577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.865 [2024-07-15 10:41:06.028603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420
00:24:17.865 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x1098200 with addr=10.0.0.2, port=4420 through 2024-07-15 10:41:06.029735 ...]
00:24:17.865 [2024-07-15 10:41:06.029815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.865 [2024-07-15 10:41:06.029843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420
00:24:17.865 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 through 2024-07-15 10:41:06.041021 ...]
00:24:17.867 [2024-07-15 10:41:06.041149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.867 [2024-07-15 10:41:06.041188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420
00:24:17.867 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x1098200 with addr=10.0.0.2, port=4420 through 2024-07-15 10:41:06.073789, the last entries recorded at console time 00:24:17.870 ...]
00:24:17.870 [2024-07-15 10:41:06.074110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.074173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.074439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.074502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.074781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.074861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.075116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.075179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.075385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.075448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.075692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.075754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.076009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.076071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.076328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.076391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.076669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.076732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.077040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.077104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 
00:24:17.871 [2024-07-15 10:41:06.077363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.077429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.077733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.077797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.078068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.078141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.078350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.078413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.078665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.078739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.079053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.079128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.079405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.079469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.079696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.079759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.080010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.080074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.080372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.080435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 
00:24:17.871 [2024-07-15 10:41:06.080691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.080754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.081092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.081155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.081385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.081449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.081685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.081748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.082032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.082096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.082378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.082440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.082682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.082747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.083015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.083080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.083382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.083446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.083693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.083756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 
00:24:17.871 [2024-07-15 10:41:06.084035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.084099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.084381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.084444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.084722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.084784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.085063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.085126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.085327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.085392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.085677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.085749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.086052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.086116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.086404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.086476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.086764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.086855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.087105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.087169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 
00:24:17.871 [2024-07-15 10:41:06.087466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.871 [2024-07-15 10:41:06.087529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.871 qpair failed and we were unable to recover it. 00:24:17.871 [2024-07-15 10:41:06.087820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.087894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.088141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.088211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.088520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.088592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.088874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.088940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.089193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.089258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.089481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.089544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.089841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.089904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.090195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.090258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.090507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.090570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 
00:24:17.872 [2024-07-15 10:41:06.090781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.090859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.091098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.091160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.091453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.091516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.091820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.091895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.092180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.092241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.092552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.092622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.092915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.092980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.093262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.093326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.093618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.093681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.093939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.094003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 
00:24:17.872 [2024-07-15 10:41:06.094268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.094331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.094610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.094673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.094914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.094982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.095222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.095287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.095558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.095622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.095858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.095923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.096208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.096272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.096518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.096581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.096861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.096926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.097232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.097296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 
00:24:17.872 [2024-07-15 10:41:06.097550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.097615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.097875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.097939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.098198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.098261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.098463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.098529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.098829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.098904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.099195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.099258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.872 [2024-07-15 10:41:06.099541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.872 [2024-07-15 10:41:06.099603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.872 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.099853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.099918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.100114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.100190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.100425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.100488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 
00:24:17.873 [2024-07-15 10:41:06.100744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.100843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.101105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.101169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.101409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.101483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.101766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.101848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.102152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.102222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.102459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.102521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.102815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.102880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.103135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.103203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.103497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.103560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.103840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.103905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 
00:24:17.873 [2024-07-15 10:41:06.104172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.104235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.104533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.104597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.104849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.104913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.105218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.105286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.105537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.105602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.105857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.105923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.106128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.106202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.106469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.106534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.106785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.106863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.107159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.107222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 
00:24:17.873 [2024-07-15 10:41:06.107414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.107477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.107696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.107761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.108079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.108154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.108448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.108511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.108726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.108790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.109094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.109158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.109436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.109509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.109762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.109842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.110125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.110188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.110462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.110534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 
00:24:17.873 [2024-07-15 10:41:06.110774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.110858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.111053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.111120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.111406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.111470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.111709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.111773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.112016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.112080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.112369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.112433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.112710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.112773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.113058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.873 [2024-07-15 10:41:06.113122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.873 qpair failed and we were unable to recover it. 00:24:17.873 [2024-07-15 10:41:06.113406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.113472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.113761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.113843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 
00:24:17.874 [2024-07-15 10:41:06.114098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.114161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.114440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.114503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.114784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.114863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.115137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.115201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.115463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.115525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.115778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.115865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.116089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.116152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.116397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.116460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.116759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.116845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.117069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.117134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 
00:24:17.874 [2024-07-15 10:41:06.117431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.117494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.117737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.117799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.118105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.118168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.118366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.118430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.118709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.118779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.119042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.119106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.119387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.119461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.119726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.119789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.120090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.120154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.120432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.120495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 
00:24:17.874 [2024-07-15 10:41:06.120782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.120887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.121138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.121201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.121466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.121529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.121821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.121885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.122124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.122187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.122438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.122501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.122714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.122780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.123060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.123126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.123384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.123446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.123652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.123717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 
00:24:17.874 [2024-07-15 10:41:06.123957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.124023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.124270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.124333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.124639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.124703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.124980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.125043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.125300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.125364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.125610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.125674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.125934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.125999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.126247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.126313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.126556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.874 [2024-07-15 10:41:06.126630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.874 qpair failed and we were unable to recover it. 00:24:17.874 [2024-07-15 10:41:06.126930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.127004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 
00:24:17.875 [2024-07-15 10:41:06.127245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.127308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.127598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.127660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.127916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.127980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.128222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.128299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.128567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.128629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.128851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.128916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.129151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.129215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.129497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.129560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.129816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.129884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.130147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.130211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 
00:24:17.875 [2024-07-15 10:41:06.130493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.130556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.130848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.130912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.131170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.131234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.131515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.131578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.131785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.131866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.132090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.132154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.132403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.132467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.132701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.132765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.133047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.133119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.133374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.133438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 
00:24:17.875 [2024-07-15 10:41:06.133690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.133753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.134067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.134129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.134419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.134482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.134695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.134758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.135065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.135137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.135437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.135498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.135741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.135829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.136088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.136152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.136384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.136446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.136678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.136741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 
00:24:17.875 [2024-07-15 10:41:06.137028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.137093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.137386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.137449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.137727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.137790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.138017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.138081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.138366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.138428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.138711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.138775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.139042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.139117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.139385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.139448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.139736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.139827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 00:24:17.875 [2024-07-15 10:41:06.140042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.140111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.875 qpair failed and we were unable to recover it. 
00:24:17.875 [2024-07-15 10:41:06.140395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.875 [2024-07-15 10:41:06.140457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.140759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.140848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.141066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.141129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.141384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.141447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.141757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.141839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.142090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.142153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.142416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.142478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.142756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.142835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.143126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.143189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.143481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.143544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 
00:24:17.876 [2024-07-15 10:41:06.143797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.143874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.144175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.144238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.144447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.144513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.144832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.144896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.145151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.145214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.145497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.145559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.145777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.145863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.146123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.146187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.146457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.146521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.146779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.146894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 
00:24:17.876 [2024-07-15 10:41:06.147186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.147248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.147505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.147568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.147868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.147933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.148234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.148307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.148564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.148627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.148875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.148938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.149190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.149253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.149499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.149561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.149779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.149856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.150087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.150150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 
00:24:17.876 [2024-07-15 10:41:06.150438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.150500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.150732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.150816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.151074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.151137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.151379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.151441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.151636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.151698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.151937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.876 [2024-07-15 10:41:06.152001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.876 qpair failed and we were unable to recover it. 00:24:17.876 [2024-07-15 10:41:06.152255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.152318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.152527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.152590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.152853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.152918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.153164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.153227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 
00:24:17.877 [2024-07-15 10:41:06.153502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.153564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.153821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.153884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.154174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.154236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.154517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.154579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.154873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.154938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.155239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.155302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.155601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.155664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.155913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.155980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.156237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.156300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.156549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.156612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 
00:24:17.877 [2024-07-15 10:41:06.156899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.156964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.157211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.157275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.157573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.157647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.157905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.157969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.158264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.158339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.158573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.158637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.158920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.158985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.159230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.159293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.159577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.159649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.159896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.159961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 
00:24:17.877 [2024-07-15 10:41:06.160196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.160261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.160549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.160611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.160904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.160969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.161210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.161274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.161526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.161589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.161850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.161914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.162205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.162269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.162506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.162571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.162860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.162925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.163171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.163235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 
00:24:17.877 [2024-07-15 10:41:06.163527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.163591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.163882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.163946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.164253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.164317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.164567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.164630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.164909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.164975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.165253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.165317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.877 [2024-07-15 10:41:06.165555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.877 [2024-07-15 10:41:06.165618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.877 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.165862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.165926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.166143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.166207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.166471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.166534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 
00:24:17.878 [2024-07-15 10:41:06.166823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.166897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.167131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.167196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.167470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.167536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.167796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.167967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.168249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.168312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.168567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.168630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.168933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.168998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.169253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.169316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.169599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.169662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.169947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.170012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 
00:24:17.878 [2024-07-15 10:41:06.170316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.170379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.170632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.170696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.170955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.171019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.171271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.171335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.171622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.171686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.171975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.172039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.172298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.172362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.172638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.172702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.173014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.173078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 00:24:17.878 [2024-07-15 10:41:06.173374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.878 [2024-07-15 10:41:06.173438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.878 qpair failed and we were unable to recover it. 
00:24:17.878 [2024-07-15 10:41:06.173730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.878 [2024-07-15 10:41:06.173794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420
00:24:17.878 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 10:41:06.174100 through 10:41:06.243455, job clock 00:24:17.878 to 00:24:17.883 ...]
00:24:17.883 [2024-07-15 10:41:06.243681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.883 [2024-07-15 10:41:06.243745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.883 qpair failed and we were unable to recover it. 00:24:17.883 [2024-07-15 10:41:06.243969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.883 [2024-07-15 10:41:06.244035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.883 qpair failed and we were unable to recover it. 00:24:17.883 [2024-07-15 10:41:06.244280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.883 [2024-07-15 10:41:06.244353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.883 qpair failed and we were unable to recover it. 00:24:17.883 [2024-07-15 10:41:06.244639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.883 [2024-07-15 10:41:06.244716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.883 qpair failed and we were unable to recover it. 00:24:17.883 [2024-07-15 10:41:06.244994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.883 [2024-07-15 10:41:06.245059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.883 qpair failed and we were unable to recover it. 00:24:17.883 [2024-07-15 10:41:06.245299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.883 [2024-07-15 10:41:06.245364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.883 qpair failed and we were unable to recover it. 00:24:17.883 [2024-07-15 10:41:06.245587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.883 [2024-07-15 10:41:06.245650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.883 qpair failed and we were unable to recover it. 00:24:17.883 [2024-07-15 10:41:06.245897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.883 [2024-07-15 10:41:06.245963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.246203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.246269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.246555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.246619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 
00:24:17.884 [2024-07-15 10:41:06.246872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.246937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.247186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.247250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.247441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.247505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.247769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.247845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.248136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.248199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.248461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.248524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.248812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.248876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.249111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.249174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.249430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.249492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.249773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.249869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 
00:24:17.884 [2024-07-15 10:41:06.250173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.250236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.250518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.250580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.250837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.250925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.251178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.251241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.251521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.251584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.251795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.251870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.252122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.252185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.252388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.252450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.252703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.252765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.253025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.253088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 
00:24:17.884 [2024-07-15 10:41:06.253382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.253445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.253693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.253756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.254051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.254115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.254351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.254427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.254724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.254787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.255066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.255129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.255352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.255416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.255668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.255731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.256012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.256077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.256367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.256430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 
00:24:17.884 [2024-07-15 10:41:06.256672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.256735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.257003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.257067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.257259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.257322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.257561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.257623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.257924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.257990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.258273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.258337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.258578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.258641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.258934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.259000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.884 [2024-07-15 10:41:06.259202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.884 [2024-07-15 10:41:06.259267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.884 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.259509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.259573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 
00:24:17.885 [2024-07-15 10:41:06.259818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.259882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.260156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.260220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.260460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.260523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.260775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.260850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.261117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.261180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.261434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.261498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.261775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.261853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.262062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.262125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.262368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.262432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.262666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.262729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 
00:24:17.885 [2024-07-15 10:41:06.263000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.263080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.263316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.263378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.263655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.263718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.263991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.264055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.264335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.264398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.264644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.264708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.264909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.264973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.265185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.265249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.265499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.265562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.265851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.265917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 
00:24:17.885 [2024-07-15 10:41:06.266170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.266233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.266487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.266549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.266834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.266900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.267137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.267200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.267449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.267513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.267812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.267876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.268116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.268183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.268457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.268521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.268760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.268836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.269096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.269159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 
00:24:17.885 [2024-07-15 10:41:06.269396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.269458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.269727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.269789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.270006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.270069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.270356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.270419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.885 qpair failed and we were unable to recover it. 00:24:17.885 [2024-07-15 10:41:06.270703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.885 [2024-07-15 10:41:06.270765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.271040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.271107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.271399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.271461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.271714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.271787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.272015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.272082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.272333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.272397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 
00:24:17.886 [2024-07-15 10:41:06.272645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.272708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.272977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.273043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.273278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.273342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.273610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.273672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.273967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.274032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.274279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.274343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.274577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.274643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.274917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.274982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.275260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.275323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.275538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.275600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 
00:24:17.886 [2024-07-15 10:41:06.275840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.275904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.276144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.276207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.276489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.276552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.276811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.276875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.277158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.277222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.277433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.277498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.277710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.277773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.278071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.278135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.278386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.278450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.278708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.278772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 
00:24:17.886 [2024-07-15 10:41:06.279036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.279099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.279343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.279407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.279690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.279753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.280020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.280083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.280343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.280406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.280714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.280778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.281047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.281110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.281400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.281464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.281674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.281740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.282063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.282131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 
00:24:17.886 [2024-07-15 10:41:06.282381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.282443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.282708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.282771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.283042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.283108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.283364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.283427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.283670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.283734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.886 [2024-07-15 10:41:06.283992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.886 [2024-07-15 10:41:06.284056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.886 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.284302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.284367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.284662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.284726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.285007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.285071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.285368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.285431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 
00:24:17.887 [2024-07-15 10:41:06.285728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.285791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.286105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.286169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.286374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.286439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.286680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.286745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.287056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.287122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.287370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.287434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.287693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.287762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.288078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.288142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.288431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.288495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.288769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.288848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 
00:24:17.887 [2024-07-15 10:41:06.289065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.289129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.289332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.289395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.289642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.289706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.289988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.290053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.290345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.290407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.290625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.290687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.290968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.291040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.291325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.291387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.291597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.291661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.291907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.291982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 
00:24:17.887 [2024-07-15 10:41:06.292283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.292348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.292550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.292613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.292858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.292922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.293110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.293173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.293461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.293524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.293819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.293894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.294154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.294216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.294456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.294520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.294716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.294783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.295022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.295087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 
00:24:17.887 [2024-07-15 10:41:06.295326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.295393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.295654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.295719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.295988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.296053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.296260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.296324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.296571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.296634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.296908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.887 [2024-07-15 10:41:06.296974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.887 qpair failed and we were unable to recover it. 00:24:17.887 [2024-07-15 10:41:06.297191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.297255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.297504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.297567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.297757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.297848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.298073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.298138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 
00:24:17.888 [2024-07-15 10:41:06.298422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.298485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.299641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.299689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.299839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.299870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.300025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.300078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.300274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.300303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.300425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.300453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.300553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.300581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.300669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.300697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.300816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.300846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.300965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.300993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 
00:24:17.888 [2024-07-15 10:41:06.301121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.301150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.301296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.301324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.301439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.301471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.301593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.301621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.301735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.301762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.301905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.301934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.302046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.302074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.302195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.302222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.302348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.302376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.302491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.302518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 
00:24:17.888 [2024-07-15 10:41:06.302640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.302668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.302787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.302822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.302938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.302966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.303091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.303119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.303244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.303272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.303364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.303391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.303507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.303534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.303649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.303677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.303771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.303807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.303901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.303929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 
00:24:17.888 [2024-07-15 10:41:06.304049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.304077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.304206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.304234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.304360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.304386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.304530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.304557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.304638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.304665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.304765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.304792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.304930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.304958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.888 qpair failed and we were unable to recover it. 00:24:17.888 [2024-07-15 10:41:06.305048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.888 [2024-07-15 10:41:06.305075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.305165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.305192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.305311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.305339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 
00:24:17.889 [2024-07-15 10:41:06.305471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.305498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.305613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.305639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.305725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.305752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.305882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.305908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.306036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.306063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.306184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.306211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.306304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.306331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.306453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.306479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.306624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.306652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.306781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.306816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 
00:24:17.889 [2024-07-15 10:41:06.306937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.306965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.307081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.307108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.307230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.307258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.307350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.307377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.307458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.307485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.307586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.307613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.307707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.307734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.307832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.307863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.307960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.307988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.308069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.308095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 
00:24:17.889 [2024-07-15 10:41:06.308209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.308237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.308364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.308391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.308487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.308514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.308606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.308632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.308754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.308781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.308891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.308918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.309010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.309038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.309162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.309190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.309283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.309310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.309406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.309434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 
00:24:17.889 [2024-07-15 10:41:06.309577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.309605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.309699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.309726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.889 [2024-07-15 10:41:06.309841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.889 [2024-07-15 10:41:06.309869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.889 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.309963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.309990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.310108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.310137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.310233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.310261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.310392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.310419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.310507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.310534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.310652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.310679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.310823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.310861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 
00:24:17.890 [2024-07-15 10:41:06.310961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.310993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.311116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.311145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.311268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.311295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.311441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.311470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.311592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.311619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.311737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.311764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.311873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.311901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.312023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.312051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.312168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.312197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.312301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.312328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 
00:24:17.890 [2024-07-15 10:41:06.312447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.312474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.312570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.312599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.312687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.312714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.312808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.312836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.312942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.312968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.313062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.313089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.313202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.313230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.313322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.313349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.313442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.313470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.313585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.313612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 
00:24:17.890 [2024-07-15 10:41:06.313727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.313754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.313867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.313894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.313985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.314012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.314135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.314164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.314305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.314333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.314455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.314482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.314628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.314656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.314769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.314807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.315574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.315607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.315730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.315759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 
00:24:17.890 [2024-07-15 10:41:06.315891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.315919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.316020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.316047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.316161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.316187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.890 [2024-07-15 10:41:06.316329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.890 [2024-07-15 10:41:06.316356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.890 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.316451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.316478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.316593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.316619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.316711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.316739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.316858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.316884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.316977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.317004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.317106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.317140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 
00:24:17.891 [2024-07-15 10:41:06.317225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.317253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.317348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.317375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.317484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.317511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.317615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.317642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.317755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.317782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.317884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.317912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.318046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.318074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.318172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.318197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.318285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.318312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.318453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.318478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 
00:24:17.891 [2024-07-15 10:41:06.318568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.318594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.318689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.318723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.318840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.318867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.318986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.319012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.319131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.319169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.319286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.319312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.319402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.319427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.319515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.319541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.319643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.319670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.319781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.319814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 
00:24:17.891 [2024-07-15 10:41:06.319908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.319935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.320022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.320049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.320143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.320179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.320266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.320292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.320412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.320438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.320552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.320580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.320697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.320723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.320838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.320867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.320976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.321019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.321120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.321153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 
00:24:17.891 [2024-07-15 10:41:06.321283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.321312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.321424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.321454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.321572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.321601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.321695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.321725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.891 [2024-07-15 10:41:06.321865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.891 [2024-07-15 10:41:06.321903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.891 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.322043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.322079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.322298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.322381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.322653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.322689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.322939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.322969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.323059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.323088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 
00:24:17.892 [2024-07-15 10:41:06.323199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.323237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.323431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.323524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.323753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.323787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.323944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.323973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.324063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.324091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.324220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.324250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.324342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.324371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.324514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.324549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.324755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.324854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.324963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.324991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 
00:24:17.892 [2024-07-15 10:41:06.325088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.325116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.325261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.325301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.325526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.325555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.325775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.325823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.325948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.325977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.326074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.326102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.326203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.326231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.326379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.326408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.326583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.326645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.326890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.326920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 
00:24:17.892 [2024-07-15 10:41:06.327012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.327041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.327158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.327188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.327331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.327367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.327513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.327555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.327737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.327773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.327904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.327933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.328034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.328061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.328215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.328253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.328415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.328482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.328646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.328717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 
00:24:17.892 [2024-07-15 10:41:06.328878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.328907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.329005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.329032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.329170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.329216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.329360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.329418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.329582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.329616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.892 [2024-07-15 10:41:06.329740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.892 [2024-07-15 10:41:06.329766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.892 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.329899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.329947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.330053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.330109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.330237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.330263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.330350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.330377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 
00:24:17.893 [2024-07-15 10:41:06.330521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.330549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.330640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.330666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.330767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.330794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.330898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.330925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.331009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.331036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.331157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.331185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.331273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.331308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.331403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.331430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.331537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.331571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.331682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.331710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 
00:24:17.893 [2024-07-15 10:41:06.331798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.331833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.331961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.331988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.332086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.332113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.332204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.332232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.332332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.332359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.332506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.332537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.332646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.332672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.332793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.332832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.332933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.332961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.333045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.333074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 
00:24:17.893 [2024-07-15 10:41:06.333215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.333251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.333352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.333378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.333468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.333495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.333593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.333619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.333715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.333742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.333851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.333878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.333969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.333997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.334096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.334124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.334248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.334276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.334363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.334389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 
00:24:17.893 [2024-07-15 10:41:06.334528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.334557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.893 [2024-07-15 10:41:06.334675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.893 [2024-07-15 10:41:06.334702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.893 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.334827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.334862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.334975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.335004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.335095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.335122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.335252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.335280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.335378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.335406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.335501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.335528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.335622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.335648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.335732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.335758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 
00:24:17.894 [2024-07-15 10:41:06.335895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.335923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.336016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.336044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.336129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.336161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.336321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.336349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.336490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.336518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.336599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.336626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.336717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.336744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.336869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.336913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.337014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.337043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.337166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.337196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 
00:24:17.894 [2024-07-15 10:41:06.337300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.337331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.337424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.337451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.337584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.337613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.337729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.337759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.337889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.337924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.338065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.338101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.338252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.338288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.338435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.338470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.338591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.338625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.338746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.338774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 
00:24:17.894 [2024-07-15 10:41:06.338881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.338910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.339006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.339035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.339176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.339211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.339375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.339409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.339525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.339559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.339706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.339740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.339889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.339918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.340022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.340050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.340173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.340200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.340360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.340395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 
00:24:17.894 [2024-07-15 10:41:06.340530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.340565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.340734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.340762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.894 qpair failed and we were unable to recover it. 00:24:17.894 [2024-07-15 10:41:06.340869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.894 [2024-07-15 10:41:06.340897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.340983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.341016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.341114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.341142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.341300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.341362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.341521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.341578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.341712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.341740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.341846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.341876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.341981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.342016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 
00:24:17.895 [2024-07-15 10:41:06.342129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.342176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.342352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.342406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.342552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.342605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.342700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.342728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.342835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.342872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.342971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.342999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.343142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.343183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.343294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.343337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.343436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.343464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.343575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.343603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 
00:24:17.895 [2024-07-15 10:41:06.343724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.343754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.343864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.343895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.343998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.344026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.344108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.344144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.344242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.344269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.344394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.344423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.344562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.344605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.344702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.344732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.344845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.344873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.344982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.345011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 
00:24:17.895 [2024-07-15 10:41:06.345141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.345170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:17.895 [2024-07-15 10:41:06.345270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.895 [2024-07-15 10:41:06.345297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:17.895 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.345420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.345449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.345596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.345625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.345717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.345745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.345867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.345897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.345995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.346024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.346129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.346159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.346284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.346327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.346455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.346516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 
00:24:18.173 [2024-07-15 10:41:06.346743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.346778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.346905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.346936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.347058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.347092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.347237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.347267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.347395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.347424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.347547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.347584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.347720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.347749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.347861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.347889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.347982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.348010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.348136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.348165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 
00:24:18.173 [2024-07-15 10:41:06.348305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.348340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.348497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.348555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.348676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.348712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.348891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.348933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.349039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.349080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.349217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.349246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.349334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.349361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.349513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.173 [2024-07-15 10:41:06.349541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.173 qpair failed and we were unable to recover it. 00:24:18.173 [2024-07-15 10:41:06.349674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.349708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.349863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.349891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 
00:24:18.174 [2024-07-15 10:41:06.350040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.350079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.350233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.350261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.350423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.350457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.350643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.350676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.350786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.350827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.350967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.350995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.351095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.351124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.351214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.351242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.351360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.351388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.351489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.351533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 
00:24:18.174 [2024-07-15 10:41:06.351709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.351743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.351902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.351931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.352016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.352044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.352168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.352196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.352298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.352337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.352476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.352522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.352648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.352682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.352830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.352879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.352997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.353026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.353135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.353169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 
00:24:18.174 [2024-07-15 10:41:06.353292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.353328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.353473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.353518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.353726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.353763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.353924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.353956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.354059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.354088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.354178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.354208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.354357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.354393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.354570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.354605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.354710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.354745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.354914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.354943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 
00:24:18.174 [2024-07-15 10:41:06.355036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.355069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.355190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.355219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.355307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.355334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.355634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.355709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.355890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.355919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.356067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.356096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.356230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.356266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.356414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.356483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.174 [2024-07-15 10:41:06.356727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.174 [2024-07-15 10:41:06.356792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.174 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.356976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.357005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 
00:24:18.175 [2024-07-15 10:41:06.357141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.357181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.357296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.357322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.357558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.357604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.357734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.357767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.357936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.357971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.358100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.358140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.358313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.358378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.358596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.358661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.358864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.358901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.359025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.359060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 
00:24:18.175 [2024-07-15 10:41:06.359200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.359235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.359511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.359576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.359786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.359832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.359999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.360033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.360220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.360255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.360392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.360430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.360681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.360746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.360933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.360968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.361099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.361134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.361331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.361394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 
00:24:18.175 [2024-07-15 10:41:06.361647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.361715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.361962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.361996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.362142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.362177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.362377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.362442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.362732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.362797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.362987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.363021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.363167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.363207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.363346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.363381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.363491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.363525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.363720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.363754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 
00:24:18.175 [2024-07-15 10:41:06.363884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.363920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.364069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.364114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.364253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.364288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.364415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.364450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.175 qpair failed and we were unable to recover it. 00:24:18.175 [2024-07-15 10:41:06.364655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.175 [2024-07-15 10:41:06.364722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.364934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.364970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.365107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.365142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.365282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.365316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.365432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.365466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.365683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.365719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 
00:24:18.176 [2024-07-15 10:41:06.365891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.365926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.366051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.366086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.366274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.366339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.366623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.366701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.366963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.367017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.367218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.367297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.367556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.367597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.367763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.367797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.367921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.367956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.368068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.368104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 
00:24:18.176 [2024-07-15 10:41:06.368242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.368277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.368407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.368441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.368692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.368758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.369018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.369054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.369230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.369265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.369427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.369480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.369694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.369728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.369873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.369909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.370018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.370052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.370186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.370233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 
00:24:18.176 [2024-07-15 10:41:06.370480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.370546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.370788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.370831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.370943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.370978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.371090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.371124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.371270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.371304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.371417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.371453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.371679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.371744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.371963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.372028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.372292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.372357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.372597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.372662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 
00:24:18.176 [2024-07-15 10:41:06.372914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.372948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.373247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.373312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.373605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.373677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.373951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.374018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.374279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.176 [2024-07-15 10:41:06.374344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.176 qpair failed and we were unable to recover it. 00:24:18.176 [2024-07-15 10:41:06.374625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.374660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.374827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.374873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.374982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.375016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.375131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.375167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.375411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.375478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 
00:24:18.177 [2024-07-15 10:41:06.375722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.375775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.376009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.376044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.376220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.376255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.376457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.376521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.376777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.376831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.376980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.377015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.377230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.377319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.377530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.377599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.377832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.377901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.378163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.378198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 
00:24:18.177 [2024-07-15 10:41:06.378313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.378349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.378491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.378526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.378827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.378869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.379009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.379068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.379319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.379385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.379628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.379693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.379928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.379963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.380117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.380151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.380295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.380330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.380491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.380556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 
00:24:18.177 [2024-07-15 10:41:06.380755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.380845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.381132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.381167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.381349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.381412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.381638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.381702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.381967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.382036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.382286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.382352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.382554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.382621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.177 [2024-07-15 10:41:06.382852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.177 [2024-07-15 10:41:06.382920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.177 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.383178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.383213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.383327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.383362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 
00:24:18.178 [2024-07-15 10:41:06.383468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.383503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.383670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.383705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.383978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.384046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.384322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.384357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.384501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.384536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.384734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.384768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.384902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.384938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.385212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.385277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.385511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.385576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.385854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.385921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 
00:24:18.178 [2024-07-15 10:41:06.386143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.386209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.386491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.386557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.386816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.386886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.387125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.387190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.387391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.387459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.387703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.387737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.387861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.387902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.388088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.388123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.388239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.388274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.388454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.388521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 
00:24:18.178 [2024-07-15 10:41:06.388854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.388922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.389183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.389219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.389321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.389354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.389505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.389540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.389684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.389718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.389939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.390006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.390312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.390366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.390617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.390683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.390973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.391039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.391289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.391355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 
00:24:18.178 [2024-07-15 10:41:06.391663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.391728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.392012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.392088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.392333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.392401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.392622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.392689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.392985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.393052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.393310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.393377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.393628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.178 [2024-07-15 10:41:06.393696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.178 qpair failed and we were unable to recover it. 00:24:18.178 [2024-07-15 10:41:06.394012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.394075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.394343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.394410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.394665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.394730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 
00:24:18.179 [2024-07-15 10:41:06.395028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.395096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.395410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.395476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.395769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.395862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.396183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.396236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.396436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.396513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.396732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.396799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.397093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.397161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.397370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.397424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.397617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.397699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.398002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.398069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 
00:24:18.179 [2024-07-15 10:41:06.398318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.398384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.398579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.398644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.398899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.398965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.399233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.399299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.399502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.399571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.399879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.399947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.400191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.400269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.400562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.400616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.400861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.400897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.401034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.401070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 
00:24:18.179 [2024-07-15 10:41:06.401328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.401394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.401645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.401709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.402011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.402077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.402326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.402390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.402683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.402748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.403013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.403081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.403334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.403399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.403699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.403764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.404029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.404101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.404389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.404454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 
00:24:18.179 [2024-07-15 10:41:06.404745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.404839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.405105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.405170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.405418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.405484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.405746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.405833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.406104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.406171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.406461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.406527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.406832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.406900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.407147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.179 [2024-07-15 10:41:06.407212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.179 qpair failed and we were unable to recover it. 00:24:18.179 [2024-07-15 10:41:06.407505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.407558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.407832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.407898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 
00:24:18.180 [2024-07-15 10:41:06.408153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.408221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.408479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.408545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.408855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.408921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.409219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.409254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.409395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.409428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.409632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.409697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.409959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.410027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.410269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.410335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.410584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.410651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.410942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.410997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 
00:24:18.180 [2024-07-15 10:41:06.411165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.411237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.411472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.411539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.411815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.411884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.412162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.412227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.412513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.412579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.412856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.412924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.413203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.413278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.413572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.413637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.413928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.413996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.414290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.414356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 
00:24:18.180 [2024-07-15 10:41:06.414598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.414664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.414928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.414995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.415282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.415353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.415606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.415674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.415955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.415991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.416159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.416195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.416416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.416480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.416697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.416758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.417061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.417127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.417385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.417450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 
00:24:18.180 [2024-07-15 10:41:06.417683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.417748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.418051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.418117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.418398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.418465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.418712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.418777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.419093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.419159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.419395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.419461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.419744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.419828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.420113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.420178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.180 [2024-07-15 10:41:06.420447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.180 [2024-07-15 10:41:06.420510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.180 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.420775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.420870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 
00:24:18.181 [2024-07-15 10:41:06.421153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.421219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.421431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.421496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.421789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.421858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.422105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.422170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.422468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.422520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.422691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.422764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.423082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.423148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.423398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.423463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.423706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.423775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.424060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.424129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 
00:24:18.181 [2024-07-15 10:41:06.424379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.424445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.424732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.424799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.425080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.425148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.425403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.425469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.425699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.425765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.426039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.426105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.426393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.426469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.426767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.426863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.427140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.427206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.427483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.427549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 
00:24:18.181 [2024-07-15 10:41:06.427838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.427906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.428172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.428237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.428512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.428547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.428693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.428729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.428979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.429045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.429330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.429396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.429643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.429711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.430016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.430083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.430359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.430394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.430563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.430598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 
00:24:18.181 [2024-07-15 10:41:06.430844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.430917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.431204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.431270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.431485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.431550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.431856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.431923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.432215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.432282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.432560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.432595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.432740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.432776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.433042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.433108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.433309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.433374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.181 qpair failed and we were unable to recover it. 00:24:18.181 [2024-07-15 10:41:06.433614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.181 [2024-07-15 10:41:06.433682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 
00:24:18.182 [2024-07-15 10:41:06.433902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.433970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.434253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.434319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.434516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.434583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.434848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.434915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.435160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.435227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.435508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.435573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.435868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.435936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.436190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.436257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.436495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.436560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.436816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.436885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 
00:24:18.182 [2024-07-15 10:41:06.437142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.437209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.437427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.437495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.437785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.437866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.438155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.438220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.438523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.438576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.438768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.438860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.439118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.439195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.439464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.439529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.439840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.439908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.440199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.440264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 
00:24:18.182 [2024-07-15 10:41:06.440516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.440580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.440845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.440899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.441102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.441182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.441463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.441528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.441821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.441887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.442182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.442247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.442504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.442569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.442858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.442924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.443201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.443267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.443547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.443612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 
00:24:18.182 [2024-07-15 10:41:06.443914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.443982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.182 [2024-07-15 10:41:06.444277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.182 [2024-07-15 10:41:06.444343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.182 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.444589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.444655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.444945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.445012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.445221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.445289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.445520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.445585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.445838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.445905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.446148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.446213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.446422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.446487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.446736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.446821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 
00:24:18.183 [2024-07-15 10:41:06.447105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.447179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.447482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.447548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.447827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.447894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.448156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.448221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.448521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.448588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.448861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.448916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.449088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.449167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.449431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.449495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.449751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.449829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.450077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.450145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 
00:24:18.183 [2024-07-15 10:41:06.450411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.450477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.450772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.450853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.451054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.451122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.451366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.451434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.451731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.451784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.451950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.452004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.452240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.452314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.452596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.452662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.452893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.452960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.453158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.453223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 
00:24:18.183 [2024-07-15 10:41:06.453505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.453572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.453872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.453938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.454217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.454282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.454561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.454627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.454886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.454954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.455170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.455238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.455528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.455592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.455889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.455955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.456198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.456262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.456515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.456579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 
00:24:18.183 [2024-07-15 10:41:06.456857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.456927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.457217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.457283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.183 qpair failed and we were unable to recover it. 00:24:18.183 [2024-07-15 10:41:06.457534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.183 [2024-07-15 10:41:06.457599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.457841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.457908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.458116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.458181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.458433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.458497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.458692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.458760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.459011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.459075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.459371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.459436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.459733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.459815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 
00:24:18.184 [2024-07-15 10:41:06.460027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.460103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.460322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.460390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.460655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.460690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.460878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.460957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.461171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.461226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.461467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.461519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.461736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.461790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.461949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.461979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.462101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.462156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.462365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.462415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 
00:24:18.184 [2024-07-15 10:41:06.462649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.462700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.462923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.462954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.463141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.463192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.463430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.463481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.463754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.463815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.463972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.464000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.464148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.464202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.464442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.464493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.464740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.464790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.464921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.464949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 
00:24:18.184 [2024-07-15 10:41:06.465125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.465177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.465383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.465436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.465692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.465744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.465938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.465968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.466111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.466175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.466432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.466484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.466709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.466764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.466963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.466992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.467149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.467177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.467376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.467405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 
00:24:18.184 [2024-07-15 10:41:06.467669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.467728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.467962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.467992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.468120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.184 [2024-07-15 10:41:06.468147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.184 qpair failed and we were unable to recover it. 00:24:18.184 [2024-07-15 10:41:06.468235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.468290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.468491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.468542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.468739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.468792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.468973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.469001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.469111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.469141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.469391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.469442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.469718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.469770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 
00:24:18.185 [2024-07-15 10:41:06.469996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.470026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.470181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.470232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.470417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.470469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.470678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.470712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.470865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.470896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.471017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.471046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.471194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.471242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.471484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.471551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.471732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.471762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.471929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.471959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 
00:24:18.185 [2024-07-15 10:41:06.472049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.472086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.472238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.472269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.472460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.472492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.472776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.472861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.472947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.472976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.473107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.473174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.473464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.473542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.473785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.473871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.473968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.473999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.474221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.474255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 
00:24:18.185 [2024-07-15 10:41:06.474431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.474465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.474715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.474780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.474967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.474996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.475223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.475298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.475542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.475610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.475893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.475923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.476048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.476089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.476182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.476212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.476400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.476468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.476756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.476849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 
00:24:18.185 [2024-07-15 10:41:06.477014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.477044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.477308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.477372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.477530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.477609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.185 [2024-07-15 10:41:06.477833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.185 [2024-07-15 10:41:06.477897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.185 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.478049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.478079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.478357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.478422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.478620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.478687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.478899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.478929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.479029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.479058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.479236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.479299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 
00:24:18.186 [2024-07-15 10:41:06.479530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.479594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.479841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.479902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.480021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.480051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.480189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.480218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.480457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.480523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.480761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.480850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.480984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.481013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.481207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.481273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.481551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.481620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.481912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.481978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 
00:24:18.186 [2024-07-15 10:41:06.482273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.482339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.482598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.482663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.482926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.482992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.483244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.483298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.483570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.483635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.483880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.483948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.484237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.484312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.484562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.484627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.484825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.484893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.485116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.485185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 
00:24:18.186 [2024-07-15 10:41:06.485481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.485546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.485841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.485906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.486190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.486224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.186 [2024-07-15 10:41:06.486372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.186 [2024-07-15 10:41:06.486406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.186 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.486666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.486718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.487001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.487066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.487363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.487428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.487713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.487777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.487994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.488061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.488352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.488417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 
00:24:18.187 [2024-07-15 10:41:06.488680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.488748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.489024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.489092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.489305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.489372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.489667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.489719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.489918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.489999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.490286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.490349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.490605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.490669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.490949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.491015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.491308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.491373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.491576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.491642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 
00:24:18.187 [2024-07-15 10:41:06.491894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.491962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.492171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.492238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.492529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.492594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.492877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.492947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.493207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.493262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.493468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.493548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.493846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.493912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.494209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.494274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.494582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.494646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.494904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.494971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 
00:24:18.187 [2024-07-15 10:41:06.495217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.495285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.495578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.495644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.495933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.496000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.496235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.496296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.496512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.496579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.496846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.496914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.497164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.497238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.497528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.497593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.497882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.497948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.498191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.498256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 
00:24:18.187 [2024-07-15 10:41:06.498499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.498567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.498858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.498924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.499166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.499235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.499532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.499597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.187 qpair failed and we were unable to recover it. 00:24:18.187 [2024-07-15 10:41:06.499882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.187 [2024-07-15 10:41:06.499948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.500237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.500302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.500646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.500711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.500945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.501013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.501295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.501362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.501572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.501640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 
00:24:18.188 [2024-07-15 10:41:06.501951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.502018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.502291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.502356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.502611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.502679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.502971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.503023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.503258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.503322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.503614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.503679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.503904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.503970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.504231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.504297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.504540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.504605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.504828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.504896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 
00:24:18.188 [2024-07-15 10:41:06.505183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.505248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.505461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.505529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.505831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.505898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.506157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.506226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.506501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.506565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.506845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.506914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.507181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.507235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.507437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.507516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.507725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.507792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.508074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.508142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 
00:24:18.188 [2024-07-15 10:41:06.508365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.508431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.508688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.508755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.509069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.509136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.509413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.509479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.509762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.509848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.510100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.510168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.510417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.510493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.510782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.510825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.510991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.511026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.511299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.511364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 
00:24:18.188 [2024-07-15 10:41:06.511659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.511710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.511893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.511971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.512225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.512291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.512528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.512592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.512876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.188 [2024-07-15 10:41:06.512944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.188 qpair failed and we were unable to recover it. 00:24:18.188 [2024-07-15 10:41:06.513228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.513293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.513557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.513622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.513901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.513968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.514250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.514315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.514608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.514673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 
00:24:18.189 [2024-07-15 10:41:06.514954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.515023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.515278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.515345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.515629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.515694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.515991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.516058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.516337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.516371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.516535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.516570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.516853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.516919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.517206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.517274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.517575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.517641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.517939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.518005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 
00:24:18.189 [2024-07-15 10:41:06.518297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.518363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.518616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.518681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.518903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.518971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.519263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.519345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.519602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.519669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.519935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.520001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.520270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.520336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.520630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.520696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.520993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.521071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.521366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.521432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 
00:24:18.189 [2024-07-15 10:41:06.521714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.521780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.522085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.522152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.522408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.522474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.522701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.522736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.522881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.522918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.523053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.523093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.523227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.523260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.523470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.523536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.523731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.523795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.524044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.524110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 
00:24:18.189 [2024-07-15 10:41:06.524382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.524447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.524694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.524762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.525063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.525141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.525364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.525429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.525725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.525790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.189 [2024-07-15 10:41:06.526070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.189 [2024-07-15 10:41:06.526142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.189 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.526380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.526445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.526696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.526761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.527066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.527132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.527436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.527488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 
00:24:18.190 [2024-07-15 10:41:06.527662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.527717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.527931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.528000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.528197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.528263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.528546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.528611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.528877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.528942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.529140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.529207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.529474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.529539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.529749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.529827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.530114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.530149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.530296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.530331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 
00:24:18.190 [2024-07-15 10:41:06.530490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.530556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.530781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.530873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.531163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.531228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.531479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.531553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.531797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.531881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.532133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.532198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.532498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.532562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.532856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.532924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.533186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.533251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.533488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.533553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 
00:24:18.190 [2024-07-15 10:41:06.533841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.533907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.534156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.534222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.534499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.534533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.534674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.534707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.534905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.534971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.535251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.535286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.535426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.535462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.535734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.535814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.536063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.536129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 00:24:18.190 [2024-07-15 10:41:06.536411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.190 [2024-07-15 10:41:06.536478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.190 qpair failed and we were unable to recover it. 
00:24:18.190 [2024-07-15 10:41:06.536766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.190 [2024-07-15 10:41:06.536866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420
00:24:18.190 qpair failed and we were unable to recover it.
[the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 10:41:06.537121 through 10:41:06.603346]
00:24:18.196 [2024-07-15 10:41:06.603611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.603664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.603838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.603914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.604206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.604272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.604519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.604583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.604860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.604933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.605211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.605244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.605409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.605442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.605694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.605758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.606018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.606084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.606333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.606397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 
00:24:18.196 [2024-07-15 10:41:06.606685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.606749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.607011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.607075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.607316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.607380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.607596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.607670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.607931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.607998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.608236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.608302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.608550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.608614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.608895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.608961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.609210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.609274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.609484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.609552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 
00:24:18.196 [2024-07-15 10:41:06.609793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.609904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.610143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.610208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.610504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.610567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.610860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.610925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.611171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.611239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.611500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.611565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.611851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.611916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.612180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.612234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.612496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.612560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 00:24:18.196 [2024-07-15 10:41:06.612820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.196 [2024-07-15 10:41:06.612885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.196 qpair failed and we were unable to recover it. 
00:24:18.197 [2024-07-15 10:41:06.613136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.613199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.613437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.613500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.613772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.613874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.614165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.614230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.614435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.614499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.614707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.614773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.615078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.615143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.615389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.615456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.615678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.615742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.615966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.616032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 
00:24:18.197 [2024-07-15 10:41:06.616322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.616387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.616674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.616737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.617027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.617091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.617334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.617369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.617512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.617546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.617798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.617879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.618132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.618199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.618405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.618470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.618708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.618774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.619080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.619146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 
00:24:18.197 [2024-07-15 10:41:06.619386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.619449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.619731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.619765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.619921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.619956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.620169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.620228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.620485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.620549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.620848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.620914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.621162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.621227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.621456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.621520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.621771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.621869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.622064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.622129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 
00:24:18.197 [2024-07-15 10:41:06.622368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.622432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.622682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.622745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.623039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.623105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.623317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.623384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.623673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.623737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.623966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.624033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.624270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.624334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.624560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.624624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.624878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.624947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 00:24:18.197 [2024-07-15 10:41:06.625242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.197 [2024-07-15 10:41:06.625306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.197 qpair failed and we were unable to recover it. 
00:24:18.197 [2024-07-15 10:41:06.625552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.625616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.625903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.625967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.626204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.626270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.626560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.626611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.626769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.626833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.627016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.627092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.627341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.627405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.627658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.627723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.627927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.627993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.628310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.628375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 
00:24:18.198 [2024-07-15 10:41:06.628591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.628658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.628929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.628965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.629102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.629136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.629337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.629403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.629639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.629703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.629971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.630040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.630231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.630296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.630585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.630649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.630893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.630967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.631206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.631271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 
00:24:18.198 [2024-07-15 10:41:06.631482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.631546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.631826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.631860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.632000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.632034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.632210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.632287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.632581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.632645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.632929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.632995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.633263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.633327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.633582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.633646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.633929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.633997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.634281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.634344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 
00:24:18.198 [2024-07-15 10:41:06.634617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.634681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.634964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.635030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.635314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.635378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.635629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.635692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.635980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.636045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.636263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.636329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.636624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.636676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.636867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.636953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.637249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.637313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.637602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.637666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 
00:24:18.198 [2024-07-15 10:41:06.637917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.198 [2024-07-15 10:41:06.637982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.198 qpair failed and we were unable to recover it. 00:24:18.198 [2024-07-15 10:41:06.638224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.199 [2024-07-15 10:41:06.638288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.199 qpair failed and we were unable to recover it. 00:24:18.199 [2024-07-15 10:41:06.638490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.199 [2024-07-15 10:41:06.638559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.199 qpair failed and we were unable to recover it. 00:24:18.199 [2024-07-15 10:41:06.638767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.199 [2024-07-15 10:41:06.638847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.199 qpair failed and we were unable to recover it. 00:24:18.199 [2024-07-15 10:41:06.639104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.199 [2024-07-15 10:41:06.639168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.199 qpair failed and we were unable to recover it. 00:24:18.199 [2024-07-15 10:41:06.639407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.199 [2024-07-15 10:41:06.639473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.199 qpair failed and we were unable to recover it. 00:24:18.199 [2024-07-15 10:41:06.639710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.199 [2024-07-15 10:41:06.639776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.199 qpair failed and we were unable to recover it. 00:24:18.199 [2024-07-15 10:41:06.640048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.199 [2024-07-15 10:41:06.640113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.199 qpair failed and we were unable to recover it. 00:24:18.199 [2024-07-15 10:41:06.640361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.199 [2024-07-15 10:41:06.640425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.199 qpair failed and we were unable to recover it. 00:24:18.199 [2024-07-15 10:41:06.640685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.199 [2024-07-15 10:41:06.640748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.199 qpair failed and we were unable to recover it. 
00:24:18.199 [2024-07-15 10:41:06.641056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.199 [2024-07-15 10:41:06.641122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.199 qpair failed and we were unable to recover it. 00:24:18.199 [2024-07-15 10:41:06.641350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.199 [2024-07-15 10:41:06.641414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.199 qpair failed and we were unable to recover it. 00:24:18.199 [2024-07-15 10:41:06.641691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.199 [2024-07-15 10:41:06.641756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.199 qpair failed and we were unable to recover it. 00:24:18.199 [2024-07-15 10:41:06.642110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.199 [2024-07-15 10:41:06.642176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.199 qpair failed and we were unable to recover it. 00:24:18.199 [2024-07-15 10:41:06.642418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.642483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.642718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.642782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.643097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.643148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.643402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.643466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.643754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.643831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.644063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.644127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 
00:24:18.200 [2024-07-15 10:41:06.644360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.644423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.644666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.644733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.645053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.645118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.645383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.645462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.645669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.645733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.646029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.646095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.646369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.646434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.646681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.646748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.647023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.647058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.647197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.647232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 
00:24:18.200 [2024-07-15 10:41:06.647409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.647474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.647738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.647820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.648119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.648183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.648444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.648508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.648759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.648841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.649063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.649129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.649368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.649431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.649692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.649770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.650034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.650099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.650381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.650446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 
00:24:18.200 [2024-07-15 10:41:06.650646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.650715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.650936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.651002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.651290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.651355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.651590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.651654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.651875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.651947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.652189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.652253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.652505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.652569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.652830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.652895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.653179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.653243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.653483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.653550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 
00:24:18.200 [2024-07-15 10:41:06.653845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.653912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.654193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.654259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.654499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.200 [2024-07-15 10:41:06.654563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.200 qpair failed and we were unable to recover it. 00:24:18.200 [2024-07-15 10:41:06.654855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.654920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.655170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.655237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.655477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.655543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.655798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.655877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.656068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.656133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.656415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.656480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.656733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.656797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 
00:24:18.201 [2024-07-15 10:41:06.657109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.657172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.657426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.657490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.657777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.657857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.658140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.658213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.658506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.658569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.658847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.658914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.659167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.659235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.659458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.659522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.659730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.659797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.660105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.660169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 
00:24:18.201 [2024-07-15 10:41:06.660440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.660504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.660757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.660840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.661133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.661197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.661445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.661511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.661722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.661789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.662037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.662102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.662379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.662442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.662648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.662712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.663020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.663087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.663344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.663410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 
00:24:18.201 [2024-07-15 10:41:06.663705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.663770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.664074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.664139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.664382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.664448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.664688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.664752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.664992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.665058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.665339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.665373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.665484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.665517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.665699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.665766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.665984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.666051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.666343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.666408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 
00:24:18.201 [2024-07-15 10:41:06.666693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.666759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.667057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.667124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.667409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.667473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.667761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.201 [2024-07-15 10:41:06.667842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.201 qpair failed and we were unable to recover it. 00:24:18.201 [2024-07-15 10:41:06.668085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.668149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.668445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.668509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.668747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.668780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.668933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.668967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.669226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.669290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.669570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.669634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 
00:24:18.202 [2024-07-15 10:41:06.669889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.669956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.670168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.670233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.670477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.670541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.670830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.670906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.671193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.671257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.671516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.671579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.671867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.671932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.672139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.672203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.672450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.672517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.672784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.672864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 
00:24:18.202 [2024-07-15 10:41:06.673056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.673119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.673400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.673464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.673720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.673785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.674017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.674080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.674323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.674386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.674663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.674726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.674960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.675025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.675283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.675347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.675587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.675650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.675904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.675972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 
00:24:18.202 [2024-07-15 10:41:06.676180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.676244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.676522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.676585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.676798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.676883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.677089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.677156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.677440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.677504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.677764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.677845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.678056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.678120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.678383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.678446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.678651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.678716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.678997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.679065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 
00:24:18.202 [2024-07-15 10:41:06.679353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.679426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.679640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.679703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.679939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.680007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.680261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.680325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.202 [2024-07-15 10:41:06.680529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.202 [2024-07-15 10:41:06.680592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.202 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.680879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.680945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.681237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.681290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.681534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.681599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.681836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.681902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.682161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.682228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 
00:24:18.203 [2024-07-15 10:41:06.682476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.682540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.682776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.682868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.683112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.683179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.683405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.683470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.683729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.683793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.684056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.684121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.684324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.684390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.684655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.684719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.684975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.685041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.685296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.685359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 
00:24:18.203 [2024-07-15 10:41:06.685616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.685683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.685931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.685997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.686265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.686328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.686518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.686582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.686856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.686925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.687218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.687281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.687558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.687621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.687925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.687992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.688279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.688343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.688647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.688711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 
00:24:18.203 [2024-07-15 10:41:06.689010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.689075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.689282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.689346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.689589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.689656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.689917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.689983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.690229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.690294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.690534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.690598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.690875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.690940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.691179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.691246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.691531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.691595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.691842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.691907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 
00:24:18.203 [2024-07-15 10:41:06.692090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.692165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.692439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.692503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.692758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.692834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.693058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.693124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.693372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.693435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.203 qpair failed and we were unable to recover it. 00:24:18.203 [2024-07-15 10:41:06.693674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.203 [2024-07-15 10:41:06.693737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.693992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.694057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.694272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.694338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.694616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.694679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.694924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.694992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 
00:24:18.204 [2024-07-15 10:41:06.695294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.695359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.695645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.695708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.695963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.696028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.696314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.696378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.696685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.696750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.697010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.697075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.697318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.697385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.697616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.697681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.697935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.698003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.698249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.698315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 
00:24:18.204 [2024-07-15 10:41:06.698586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.698650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.698902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.698968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.699215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.699278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.699526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.699590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.699796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.699875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.700118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.700206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.700576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.700670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.700976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.701043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.701295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.701360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.701611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.701675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 
00:24:18.204 [2024-07-15 10:41:06.701929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.701995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.702259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.702323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.702579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.702670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.703043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.703115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.703341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.703408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.703688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.703753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.704050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.704114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.704365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.704438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.204 [2024-07-15 10:41:06.704709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.204 [2024-07-15 10:41:06.704774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.204 qpair failed and we were unable to recover it. 00:24:18.485 [2024-07-15 10:41:06.705107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.705199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 
00:24:18.485 [2024-07-15 10:41:06.705572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.705697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 00:24:18.485 [2024-07-15 10:41:06.706072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.706191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 00:24:18.485 [2024-07-15 10:41:06.706511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.706605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 00:24:18.485 [2024-07-15 10:41:06.706888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.706962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 00:24:18.485 [2024-07-15 10:41:06.707262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.707330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 00:24:18.485 [2024-07-15 10:41:06.707589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.707673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 00:24:18.485 [2024-07-15 10:41:06.707977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.708047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 00:24:18.485 [2024-07-15 10:41:06.708341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.708409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 00:24:18.485 [2024-07-15 10:41:06.708672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.708739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 00:24:18.485 [2024-07-15 10:41:06.709053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.709133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 
00:24:18.485 [2024-07-15 10:41:06.709384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.709475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 00:24:18.485 [2024-07-15 10:41:06.709775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.709872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 00:24:18.485 [2024-07-15 10:41:06.710144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.710236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 00:24:18.485 [2024-07-15 10:41:06.710542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.710611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 00:24:18.485 [2024-07-15 10:41:06.710913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.485 [2024-07-15 10:41:06.710980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.485 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.711239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.711306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.711599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.711663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.711947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.712013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.712306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.712371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.712622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.712688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 
00:24:18.486 [2024-07-15 10:41:06.712981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.713047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.713289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.713351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.713577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.713642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.713891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.713960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.714252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.714315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.714592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.714655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.714911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.714978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.715281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.715345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.715600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.715664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.715908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.715974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 
00:24:18.486 [2024-07-15 10:41:06.716253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.716316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.716564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.716630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.716854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.716921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.717205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.717269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.717558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.717621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.717913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.717978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.718223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.718287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.718563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.718627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.718872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.718937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.719124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.719188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 
00:24:18.486 [2024-07-15 10:41:06.719481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.719555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.719833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.719898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.720145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.720209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.486 [2024-07-15 10:41:06.720456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.486 [2024-07-15 10:41:06.720521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.486 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.720825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.720890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.721138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.721202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.721456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.721519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.721731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.721795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.722065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.722132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.722412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.722475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 
00:24:18.487 [2024-07-15 10:41:06.722717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.722780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.723053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.723117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.723364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.723426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.723682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.723746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.724055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.724121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.724402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.724466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.724706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.724772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.725075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.725139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.725429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.725493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.725743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.725825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 
00:24:18.487 [2024-07-15 10:41:06.726047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.726113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.726368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.726433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.726683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.726746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.727029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.727094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.727337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.727402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.727649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.727712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.728003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.728068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.728332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.728399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.728616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.728680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.728927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.728993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 
00:24:18.487 [2024-07-15 10:41:06.729238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.729303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.729555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.729621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.729880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.729945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.730202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.487 [2024-07-15 10:41:06.730265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.487 qpair failed and we were unable to recover it. 00:24:18.487 [2024-07-15 10:41:06.730508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.730573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.730867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.730932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.731210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.731274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.731566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.731629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.731916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.731982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.732275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.732338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 
00:24:18.488 [2024-07-15 10:41:06.732586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.732660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.732939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.733005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.733248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.733312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.733516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.733583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.733845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.733912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.734158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.734224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.734435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.734502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.734696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.734762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.735015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.735080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.735326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.735394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 
00:24:18.488 [2024-07-15 10:41:06.735682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.735746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.735997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.736062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.736266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.736329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.736576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.736639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.736872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.736941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.737228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.737294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.737540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.737604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.737851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.737916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.738127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.738190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.738387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.738451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 
00:24:18.488 [2024-07-15 10:41:06.738688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.738753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.739026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.739090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.739329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.739393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.739638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.739701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.739929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.488 [2024-07-15 10:41:06.739997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.488 qpair failed and we were unable to recover it. 00:24:18.488 [2024-07-15 10:41:06.740203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.740269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.740549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.740613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.740913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.740979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.741219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.741281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.741529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.741592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 
00:24:18.489 [2024-07-15 10:41:06.741823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.741892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.742185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.742248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.742493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.742556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.742849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.742915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.743156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.743219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.743424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.743487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.743738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.743833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.744125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.744188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.744422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.744485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.744724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.744789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 
00:24:18.489 [2024-07-15 10:41:06.745058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.745132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.745377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.745441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.745689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.745752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.745979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.746046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.746275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.746340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.746588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.746655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.746931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.746998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.747227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.747290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.747531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.747594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.747820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.747884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 
00:24:18.489 [2024-07-15 10:41:06.748119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.748183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.748420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.748484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.748762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.748840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.749089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.749153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.749398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.489 [2024-07-15 10:41:06.749463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.489 qpair failed and we were unable to recover it. 00:24:18.489 [2024-07-15 10:41:06.749754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.749830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.750055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.750118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.750367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.750430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.750677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.750743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.750974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.751039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 
00:24:18.490 [2024-07-15 10:41:06.751307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.751371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.751620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.751684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.751909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.751974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.752270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.752333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.752543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.752609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.752902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.752967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.753210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.753275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.753538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.753605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.753900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.753966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.754245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.754309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 
00:24:18.490 [2024-07-15 10:41:06.754557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.754624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.754880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.754946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.755199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.755263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.755555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.755619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.755858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.755923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.756168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.756231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.756461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.756525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.756774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.756852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.757108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.757195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.757514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.757609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 
00:24:18.490 [2024-07-15 10:41:06.757885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.757962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.758245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.758310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.758550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.758613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.490 [2024-07-15 10:41:06.758874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.490 [2024-07-15 10:41:06.758940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.490 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.759183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.759248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.759481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.759548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.759776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.759872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.760157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.760221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.760499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.760563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.760818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.760884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 
00:24:18.491 [2024-07-15 10:41:06.761087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.761155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.761387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.761450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.761687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.761754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.762008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.762072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.762348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.762411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.762696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.762759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.763025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.763093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.763385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.763448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.763692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.763755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.764066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.764131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 
00:24:18.491 [2024-07-15 10:41:06.764412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.764476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.764718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.764782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.765056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.765119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.765402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.765466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.765676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.765741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.491 [2024-07-15 10:41:06.766001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.491 [2024-07-15 10:41:06.766069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.491 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.766371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.766435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.766690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.766757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.767022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.767087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.767365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.767429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 
00:24:18.492 [2024-07-15 10:41:06.767709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.767772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.768045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.768119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.768365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.768431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.768671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.768736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.769035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.769101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.769384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.769448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.769712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.769775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.770038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.770105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.770401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.770465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.770743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.770823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 
00:24:18.492 [2024-07-15 10:41:06.771089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.771164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.771402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.771465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.771744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.771840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.772108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.772171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.772372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.772437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.772682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.772748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.773020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.773089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.773333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.773399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.773599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.773662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.773917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.773986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 
00:24:18.492 [2024-07-15 10:41:06.774267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.774332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.774542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.774608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.774875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.774940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.775177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.775241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.492 [2024-07-15 10:41:06.775458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.492 [2024-07-15 10:41:06.775522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.492 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.775733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.775799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.776099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.776163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.776397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.776463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.776708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.776771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.777029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.777094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 
00:24:18.493 [2024-07-15 10:41:06.777379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.777443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.777651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.777718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.777973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.778037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.778286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.778349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.778599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.778663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.778913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.778981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.779242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.779305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.779522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.779587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.779787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.779877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.780155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.780219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 
00:24:18.493 [2024-07-15 10:41:06.780466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.780532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.780784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.780863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.781145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.781209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.781454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.781517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.781798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.781883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.782124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.782187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.782445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.782508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.782719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.782783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.783009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.783074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.783332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.783395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 
00:24:18.493 [2024-07-15 10:41:06.783659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.783733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.784045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.784114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.784366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.784433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.784652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.784716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.785025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.785091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.785337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.493 [2024-07-15 10:41:06.785401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.493 qpair failed and we were unable to recover it. 00:24:18.493 [2024-07-15 10:41:06.785664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.785727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.785954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.786022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.786207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.786271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.786522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.786588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 
00:24:18.494 [2024-07-15 10:41:06.786853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.786919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.787188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.787252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.787461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.787525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.787769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.787868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.788181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.788246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.788489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.788551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.788777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.788857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.789114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.789175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.789419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.789484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.789763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.789843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 
00:24:18.494 [2024-07-15 10:41:06.790095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.790158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.790420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.790483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.790713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.790780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.791042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.791106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.791347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.791412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.791656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.791717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.792024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.792095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.792392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.792459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.792699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.792763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.792983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.793049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 
00:24:18.494 [2024-07-15 10:41:06.793305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.793368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.793659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.793725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.793984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.794049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.794288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.794355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.494 [2024-07-15 10:41:06.794644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.494 [2024-07-15 10:41:06.794707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.494 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.794941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.795009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.795295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.795365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.795658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.795722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.796025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.796090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.796376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.796447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 
00:24:18.495 [2024-07-15 10:41:06.796737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.796834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.797056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.797123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.797376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.797441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.797666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.797729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.798029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.798096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.798342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.798408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.798627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.798695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.798957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.799024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.799229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.799293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.799571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.799642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 
00:24:18.495 [2024-07-15 10:41:06.799943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.800010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.800253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.800321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.800616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.800680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.800921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.800988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.801226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.801296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.801509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.801576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.801836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.801902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.802184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.802248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.802495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.802568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.802871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.802938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 
00:24:18.495 [2024-07-15 10:41:06.803181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.803247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.803530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.803594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.803841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.803912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.804193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.804263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.804557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.804620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.495 [2024-07-15 10:41:06.804835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.495 [2024-07-15 10:41:06.804902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.495 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.805110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.805173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.805428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.805495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.805787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.805867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.806070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.806138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 
00:24:18.496 [2024-07-15 10:41:06.806345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.806412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.806670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.806736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.806970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.807039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.807333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.807399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.807682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.807746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.808069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.808136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.808385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.808451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.808743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.808835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.809114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.809179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.809414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.809477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 
00:24:18.496 [2024-07-15 10:41:06.809699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.809773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.810054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.810125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.810381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.810446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.810733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.810798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.811061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.811125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.811374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.811445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.811708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.811776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.812051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.812118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.812332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.812403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.812688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.812757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 
00:24:18.496 [2024-07-15 10:41:06.813026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.813091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.813288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.813352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.813586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.813653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.813954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.814022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.814286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.814350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.814592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.814657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.496 qpair failed and we were unable to recover it. 00:24:18.496 [2024-07-15 10:41:06.814944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.496 [2024-07-15 10:41:06.815010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.815264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.815332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.815578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.815642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.815906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.815972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 
00:24:18.497 [2024-07-15 10:41:06.816214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.816279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.816519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.816589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.816895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.816966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.817229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.817293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.817505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.817568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.817774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.817855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.818129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.818199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.818414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.818481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.818772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.818852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.819144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.819208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 
00:24:18.497 [2024-07-15 10:41:06.819501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.819565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.819880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.819948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.820210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.820275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.820554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.820620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.820830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.820898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.821101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.821175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.821427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.821494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.821753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.821856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.822084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.822148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.822402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.822466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 
00:24:18.497 [2024-07-15 10:41:06.822724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.822829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.823086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.823152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.497 qpair failed and we were unable to recover it. 00:24:18.497 [2024-07-15 10:41:06.823401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.497 [2024-07-15 10:41:06.823467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.823765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.823867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.824110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.824183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.824469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.824538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.824819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.824887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.825092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.825159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.825420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.825485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.825731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.825821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 
00:24:18.498 [2024-07-15 10:41:06.826096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.826161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.826409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.826473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.826758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.826841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.827122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.827194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.827462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.827532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.827788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.827886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.828087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.828152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.828431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.828495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.828785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.828876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.829096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.829161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 
00:24:18.498 [2024-07-15 10:41:06.829403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.829471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.829759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.829843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.830034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.830098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.830357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.830425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.830670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.830735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.831004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.831068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.831319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.831386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.831656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.831723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.831999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.832067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.832348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.832412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 
00:24:18.498 [2024-07-15 10:41:06.832656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.832720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.832955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.833023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.833296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.833361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.833616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.833681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.833934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.834000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.498 [2024-07-15 10:41:06.834239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.498 [2024-07-15 10:41:06.834305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.498 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.834598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.834664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.834956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.835023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.835320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.835384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.835642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.835705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 
00:24:18.499 [2024-07-15 10:41:06.836019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.836096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.836361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.836426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.836712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.836776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.837042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.837106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.837369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.837432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.837642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.837714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.837984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.838050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.838304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.838368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.838624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.838691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.838942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.839008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 
00:24:18.499 [2024-07-15 10:41:06.839215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.839283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.839586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.839651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.839865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.839933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.840226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.840292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.840563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.840636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.840889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.840955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.841251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.841316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.841559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.841626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.841969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.842037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.842326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.842392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 
00:24:18.499 [2024-07-15 10:41:06.842585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.842647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.842908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.842975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.843224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.843290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.843571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.843642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.843905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.843971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.844222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.844287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.844486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.844550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.499 [2024-07-15 10:41:06.844818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.499 [2024-07-15 10:41:06.844891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.499 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.845142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.845208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.845461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.845526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 
00:24:18.500 [2024-07-15 10:41:06.845746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.845823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.846028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.846096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.846355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.846399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.846538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.846574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.846744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.846779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.846939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.846972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.847133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.847166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.847290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.847324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.847431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.847465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.847624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.847657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 
00:24:18.500 [2024-07-15 10:41:06.847820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.847854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.848019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.848052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.848209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.848241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.848412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.848444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.848555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.848589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.848813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.848885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.848998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.849031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.849167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.849200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.849337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.849370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.849562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.849627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 
00:24:18.500 [2024-07-15 10:41:06.849860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.849894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.850029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.850062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.850193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.850224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.850350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.850382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.850545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.850576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.850675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.850710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.500 qpair failed and we were unable to recover it. 00:24:18.500 [2024-07-15 10:41:06.850848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.500 [2024-07-15 10:41:06.850881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.851015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.851047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.851206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.851238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.851345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.851377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 
00:24:18.501 [2024-07-15 10:41:06.851512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.851543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.851794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.851868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.852045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.852078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.852182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.852214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.852358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.852411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.852608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.852662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.852799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.852839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.852998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.853036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.853226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.853284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.853460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.853515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 
00:24:18.501 [2024-07-15 10:41:06.853649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.853680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.853817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.853849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.853975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.854023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.854216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.854271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.854451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.854506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.854614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.854644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.854770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.854806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.854935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.854986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.855093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.855123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.855278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.855309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 
00:24:18.501 [2024-07-15 10:41:06.855436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.855467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.855608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.855639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.855732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.855763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.855873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.855903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.856016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.856045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.856180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.856210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.856306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.856337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.856437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.856468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.856584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.501 [2024-07-15 10:41:06.856616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.501 qpair failed and we were unable to recover it. 00:24:18.501 [2024-07-15 10:41:06.856728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.502 [2024-07-15 10:41:06.856757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.502 qpair failed and we were unable to recover it. 
00:24:18.502 [2024-07-15 10:41:06.856896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.502 [2024-07-15 10:41:06.856927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420
00:24:18.502 qpair failed and we were unable to recover it.
00:24:18.502 [2024-07-15 10:41:06.857019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.502 [2024-07-15 10:41:06.857049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420
00:24:18.502 qpair failed and we were unable to recover it.
[... the same three-message failure group (connect() failed, errno = 111; sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 10:41:06.856 through 10:41:06.892 ...]
00:24:18.509 [2024-07-15 10:41:06.893021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.893051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.893149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.893181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.893312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.893342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.893507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.893538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.893651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.893682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.893841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.893873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.894030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.894061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.894197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.894229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.894340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.894370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.894502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.894533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 
00:24:18.509 [2024-07-15 10:41:06.894639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.894672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.894781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.894828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.894969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.895000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.895095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.895125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.895238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.895268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.895404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.895435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.895537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.895568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.895727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.895758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.895874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.895905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.896041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.896072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 
00:24:18.509 [2024-07-15 10:41:06.896232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.509 [2024-07-15 10:41:06.896263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.509 qpair failed and we were unable to recover it. 00:24:18.509 [2024-07-15 10:41:06.896409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.896439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.896571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.896602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.896736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.896766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.896932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.896985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.897160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.897191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.897327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.897357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.897492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.897522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.897667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.897697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.897822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.897854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 
00:24:18.510 [2024-07-15 10:41:06.897965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.897996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.898137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.898169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.898300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.898330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.898438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.898469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.898577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.898607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.898729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.898759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.898899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.898931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.899063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.899094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.899201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.899232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.899373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.899405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 
00:24:18.510 [2024-07-15 10:41:06.899561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.899592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.899728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.899759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.899932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.899964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.900071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.900101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.900230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.900261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.900404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.900435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.900593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.900622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.900725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.900756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.900884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.900914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 00:24:18.510 [2024-07-15 10:41:06.901020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.510 [2024-07-15 10:41:06.901050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.510 qpair failed and we were unable to recover it. 
00:24:18.511 [2024-07-15 10:41:06.901185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.901214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.901320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.901349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.901463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.901493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.901595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.901624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.901749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.901780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.901916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.901946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.902084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.902116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.902246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.902275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.902411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.902442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.902557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.902589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 
00:24:18.511 [2024-07-15 10:41:06.902685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.902715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.902819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.902852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.902972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.903025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.903162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.903194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.903353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.903384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.903497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.903528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.903648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.903679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.903817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.903848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.904006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.904058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.904216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.904246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 
00:24:18.511 [2024-07-15 10:41:06.904379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.904410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.904534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.904565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.904702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.904733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.904861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.904920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.905090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.905149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.905349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.905400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.905510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.905541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.905704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.905734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.905833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.905865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.906045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.906117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 
00:24:18.511 [2024-07-15 10:41:06.906278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.906368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.511 [2024-07-15 10:41:06.906495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.511 [2024-07-15 10:41:06.906525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.511 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.906692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.906723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.906880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.906942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.907103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.907151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.907314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.907373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.907528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.907559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.907676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.907708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.907823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.907856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.908053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.908107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 
00:24:18.512 [2024-07-15 10:41:06.908257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.908309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.908466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.908496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.908654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.908685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.908833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.908863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.909023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.909069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.909260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.909323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.909485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.909515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.909658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.909689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.909797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.909841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.910018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.910084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 
00:24:18.512 [2024-07-15 10:41:06.910271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.910337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.910506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.910537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.910641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.910671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.910838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.910869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.911010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.911059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.911221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.911252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.911358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.911393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.911527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.911557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.911657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.911688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.911845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.911876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 
00:24:18.512 [2024-07-15 10:41:06.911989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.512 [2024-07-15 10:41:06.912021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.512 qpair failed and we were unable to recover it. 00:24:18.512 [2024-07-15 10:41:06.912124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.912154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.912256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.912288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.912421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.912451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.912553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.912584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.912705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.912735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.912888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.912918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.913051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.913081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.913210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.913239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.913397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.913427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 
00:24:18.513 [2024-07-15 10:41:06.913571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.913602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.913737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.913766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.913981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.914030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.914181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.914231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.914324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.914355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.914491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.914521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.914625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.914657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.914784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.914827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.914960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.914990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 00:24:18.513 [2024-07-15 10:41:06.915150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.513 [2024-07-15 10:41:06.915181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.513 qpair failed and we were unable to recover it. 
00:24:18.513 [2024-07-15 10:41:06.915292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.513 [2024-07-15 10:41:06.915322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420
00:24:18.513 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats back-to-back for every retried connect attempt, with timestamps running from 10:41:06.915 through 10:41:06.949 ...]
00:24:18.520 [2024-07-15 10:41:06.949540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.520 [2024-07-15 10:41:06.949572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420
00:24:18.520 qpair failed and we were unable to recover it.
00:24:18.520 [2024-07-15 10:41:06.949739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.520 [2024-07-15 10:41:06.949770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.520 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.949919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.949972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.950122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.950171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.950304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.950354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.950465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.950496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.950655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.950685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.950787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.950826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.950984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.951015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.951155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.951185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.951291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.951321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 
00:24:18.521 [2024-07-15 10:41:06.951453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.951488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.951630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.951660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.951765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.951796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.951977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.952028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.952178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.952227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.952361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.952392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.952506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.952537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.952665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.952695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.952790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.952832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.952970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.953000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 
00:24:18.521 [2024-07-15 10:41:06.953115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.953145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.953273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.953303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.953442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.953471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.953576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.953607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.953744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.953775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.953945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.953975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.954104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.954135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.954291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.954321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.954430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.954460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.521 [2024-07-15 10:41:06.954562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.954592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 
00:24:18.521 [2024-07-15 10:41:06.954727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.521 [2024-07-15 10:41:06.954756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.521 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.954888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.954920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.955027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.955057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.955182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.955212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.955345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.955375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.955482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.955513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.955619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.955650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.955749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.955779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.955924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.955955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.956083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.956114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 
00:24:18.522 [2024-07-15 10:41:06.956211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.956242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.956373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.956403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.956559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.956589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.956719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.956749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.956871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.956902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.957017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.957048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.957146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.957177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.957273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.957304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.957434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.957464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.957623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.957653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 
00:24:18.522 [2024-07-15 10:41:06.957788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.957840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.957950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.957985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.958143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.958173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.958277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.958308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.958416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.958446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.958575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.958605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.958762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.958792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.958911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.958942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.959076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.959107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.522 [2024-07-15 10:41:06.959220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.959249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 
00:24:18.522 [2024-07-15 10:41:06.959389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.522 [2024-07-15 10:41:06.959420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.522 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.959543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.959573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.959702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.959732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.959866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.959897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.960009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.960039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.960186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.960216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.960340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.960371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.960496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.960526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.960633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.960664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.960762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.960791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 
00:24:18.523 [2024-07-15 10:41:06.960950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.960981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.961084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.961114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.961242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.961274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.961404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.961434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.961561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.961591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.961689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.961720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.961848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.961878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.961976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.962007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.962106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.962141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.962284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.962314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 
00:24:18.523 [2024-07-15 10:41:06.962436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.962466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.962624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.962655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.962791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.962837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.962965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.962995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.963124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.963153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.963291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.963320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.963423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.963453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.963558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.963590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.963751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.963782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.963950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.963980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 
00:24:18.523 [2024-07-15 10:41:06.964089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.964120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.964248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.964280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.523 [2024-07-15 10:41:06.964413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.523 [2024-07-15 10:41:06.964443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.523 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.964580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.964612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.964712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.964742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.964877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.964908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.965019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.965052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.965189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.965219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.965323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.965353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.965454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.965485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 
00:24:18.524 [2024-07-15 10:41:06.965622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.965652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.965756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.965786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.965942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.965972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.966100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.966131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.966261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.966290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.966424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.966458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.966591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.966622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.966754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.966785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.966910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.966941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.967076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.967106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 
00:24:18.524 [2024-07-15 10:41:06.967263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.967294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.967424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.967453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.967552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.967584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.967713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.967742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.967878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.967909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.968046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.968077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.968179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.968209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.968372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.968403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.968505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.968536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.968645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.968675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 
00:24:18.524 [2024-07-15 10:41:06.968781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.968846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.968962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.968992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.969125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.969156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.524 [2024-07-15 10:41:06.969260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.524 [2024-07-15 10:41:06.969290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.524 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.969422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.969451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.969550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.969581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.969738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.969769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.969887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.969918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.970080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.970111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.970241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.970272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 
00:24:18.525 [2024-07-15 10:41:06.970399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.970429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.970587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.970617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.970709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.970739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.970888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.970947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.971091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.971141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.971302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.971333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.971468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.971498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.971663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.971694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.971798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.971835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.972017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.972067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 
00:24:18.525 [2024-07-15 10:41:06.972202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.972252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.972354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.972384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.972516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.972545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.972676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.972706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.972837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.972868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.973000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.973031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.973171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.973202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.973316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.973346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.973444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.973473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 00:24:18.525 [2024-07-15 10:41:06.973592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.525 [2024-07-15 10:41:06.973623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.525 qpair failed and we were unable to recover it. 
00:24:18.526 [2024-07-15 10:41:06.973750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.973781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.973900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.973930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.974061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.974091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.974204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.974234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.974330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.974360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.974472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.974504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.974633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.974663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.974799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.974841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.974953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.974984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.975116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.975146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 
00:24:18.526 [2024-07-15 10:41:06.975317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.975350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.975479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.975509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.975655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.975685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.975784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.975826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.975986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.976016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.976123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.976154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.976284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.976316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.976435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.976465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.976569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.976600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.976702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.976732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 
00:24:18.526 [2024-07-15 10:41:06.976904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.976936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.977044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.977075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.977190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.977220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.977319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.977354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.977447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.977476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.977620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.977651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.977774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.977811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.977923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.977953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.978084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.978115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 00:24:18.526 [2024-07-15 10:41:06.978242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.526 [2024-07-15 10:41:06.978271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.526 qpair failed and we were unable to recover it. 
00:24:18.526 [2024-07-15 10:41:06.978404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.978436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.978550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.978581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.978744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.978774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.978929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.978960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.979069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.979098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.979203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.979232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.979391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.979422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.979586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.979616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.979783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.979823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.979963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.980012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 
00:24:18.527 [2024-07-15 10:41:06.980163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.980218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.980330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.980359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.980467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.980497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.980606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.980636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.980768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.980797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.980950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.980982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.981114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.981144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.981269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.981299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.981438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.981469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.981602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.981632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 
00:24:18.527 [2024-07-15 10:41:06.981745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.981781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.981928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.981959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.982090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.982119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.982253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.982284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.982446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.982476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.982590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.982620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.982754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.982785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.982938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.982968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.983125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.983155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.983263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.983296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 
00:24:18.527 [2024-07-15 10:41:06.983436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.983466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.983582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.983612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.983724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.983754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.983903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.983934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.527 [2024-07-15 10:41:06.984094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.527 [2024-07-15 10:41:06.984125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.527 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.984228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.984259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.984355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.984387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.984557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.984588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.984722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.984753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.984894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.984925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 
00:24:18.528 [2024-07-15 10:41:06.985032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.985063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.985203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.985232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.985359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.985389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.985525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.985556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.985682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.985711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.985846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.985878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.985990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.986021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.986182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.986213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.986389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.986420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.986543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.986573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 
00:24:18.528 [2024-07-15 10:41:06.986711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.986741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.986852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.986884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.986979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.987009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.987162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.987193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.987299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.987329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.987486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.987515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.987625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.987656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.987813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.987843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.987994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.988041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.988172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.988223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 
00:24:18.528 [2024-07-15 10:41:06.988311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.988341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.988466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.988516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.988657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.988692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.988836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.988890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.989050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.989090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.989277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.989318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.989452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.989492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.528 [2024-07-15 10:41:06.989626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.528 [2024-07-15 10:41:06.989703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.528 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.989901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.989935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.990092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.990158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 
00:24:18.529 [2024-07-15 10:41:06.990318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.990384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.990629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.990693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.990891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.990924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.991063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.991095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.991233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.991273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.991407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.991440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.991539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.991571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.991706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.991746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.991914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.991948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.992093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.992125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 
00:24:18.529 [2024-07-15 10:41:06.992263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.992297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.992450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.992488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.992615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.992654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.992820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.992873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.993016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.993049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.993209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.993242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.993378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.993411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.993546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.993579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.993745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.993777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.993922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.993956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 
00:24:18.529 [2024-07-15 10:41:06.994096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.994127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.994274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.994323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.994477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.994525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.994681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.994712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.994846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.994879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.995099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.995161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.995381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.995435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.995651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.995706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.995818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.995848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.996006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.996056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 
00:24:18.529 [2024-07-15 10:41:06.996193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.996243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.996365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.996419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.996557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.996587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.529 qpair failed and we were unable to recover it. 00:24:18.529 [2024-07-15 10:41:06.996717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.529 [2024-07-15 10:41:06.996748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.996899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.996950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.997135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.997184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.997384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.997435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.997537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.997567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.997690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.997720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.997846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.997877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 
00:24:18.530 [2024-07-15 10:41:06.998008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.998039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.998205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.998235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.998393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.998444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.998577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.998607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.998738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.998768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.998897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.998928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.999039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.999069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.999181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.999212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.999342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.999373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.999497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.999527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 
00:24:18.530 [2024-07-15 10:41:06.999660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.999691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.999828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:06.999859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:06.999997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.000027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.000133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.000163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.000278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.000308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.000444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.000475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.000586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.000616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.000724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.000754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.000896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.000931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.001066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.001096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 
00:24:18.530 [2024-07-15 10:41:07.001194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.001222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.001354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.001382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.001495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.001526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.001682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.001712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.001826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.001858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.001968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.001999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.002110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.002140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.002276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.002307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.002442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.002472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 00:24:18.530 [2024-07-15 10:41:07.002598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.530 [2024-07-15 10:41:07.002628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.530 qpair failed and we were unable to recover it. 
00:24:18.530 [2024-07-15 10:41:07.002762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.002792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.002911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.002942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.003073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.003103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.003242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.003272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.003443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.003473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.003632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.003663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.003819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.003851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.004032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.004085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.004271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.004321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.004453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.004484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 
00:24:18.531 [2024-07-15 10:41:07.004644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.004675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.004775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.004825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.004961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.005010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.005163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.005213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.005363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.005413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.005513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.005542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.005677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.005707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.005812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.005842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.005970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.006002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.006107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.006137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 
00:24:18.531 [2024-07-15 10:41:07.006272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.006302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.006409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.006439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.006567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.006597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.006710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.006741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.006879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.006911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.007071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.007102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.007211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.007241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.007380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.007410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.007517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.531 [2024-07-15 10:41:07.007548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.531 qpair failed and we were unable to recover it. 00:24:18.531 [2024-07-15 10:41:07.007693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.007724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 
00:24:18.532 [2024-07-15 10:41:07.007883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.007914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.008070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.008121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.008279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.008317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.008498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.008529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.008657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.008687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.008826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.008870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.009013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.009076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.009229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.009285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.009443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.009473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.009582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.009625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 
00:24:18.532 [2024-07-15 10:41:07.009774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.009824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.009951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.009983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.010131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.010163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.010307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.010338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.010475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.010505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.010677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.010711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.010855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.010900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.011060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.011092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.011197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.011228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.011336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.011366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 
00:24:18.532 [2024-07-15 10:41:07.011534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.011566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.011674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.011710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.011845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.011877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.011984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.012014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.012112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.012142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.012245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.012276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.012437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.012477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.012630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.012662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.012776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.012827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.532 [2024-07-15 10:41:07.012972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.013006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 
00:24:18.532 [2024-07-15 10:41:07.013147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.532 [2024-07-15 10:41:07.013178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.532 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.013309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.013341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.013449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.013480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.013614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.013644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.013749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.013779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.013930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.013963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.014069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.014099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.014191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.014221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.014324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.014359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.014500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.014543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 
00:24:18.813 [2024-07-15 10:41:07.014703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.014752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.014920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.014964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.015079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.015111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.015262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.015294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.015399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.015431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.015544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.015576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.015712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.015743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.015858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.015892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.016047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.016083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.016231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.016266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 
00:24:18.813 [2024-07-15 10:41:07.016419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.016454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.016559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.016595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.016745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.016776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.016921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.016953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.017054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.017084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.017191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.017225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.017375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.017412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.017577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.813 [2024-07-15 10:41:07.017640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.813 qpair failed and we were unable to recover it. 00:24:18.813 [2024-07-15 10:41:07.017822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.017871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.018003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.018034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 
00:24:18.814 [2024-07-15 10:41:07.018166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.018198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.018302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.018334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.018459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.018490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.018599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.018632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.018734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.018766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.018935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.018967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.019080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.019117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.019257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.019289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.019446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.019482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.019603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.019651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 
00:24:18.814 [2024-07-15 10:41:07.019784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.019845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.019977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.020009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.020147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.020178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.020342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.020373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.020511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.020544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.020666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.020697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.020825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.020858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.020973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.021005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.021137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.021168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.021302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.021335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 
00:24:18.814 [2024-07-15 10:41:07.021497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.021534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.021700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.021731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.021844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.021876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.022009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.022040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.022136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.022167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.022300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.022331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.022470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.022502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.022635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.022666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.022789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.022828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.022988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.023019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 
00:24:18.814 [2024-07-15 10:41:07.023120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.023152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.023258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.023291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.023441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.023477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.023624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.023675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.023841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.023873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.023962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.023993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.024093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.024143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.024287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.024323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.024447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.024483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.024628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.024665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 
00:24:18.814 [2024-07-15 10:41:07.024783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.024830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.024966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.024999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.025108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.025140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.025255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.025287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.025443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.025478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.025638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.025673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.025786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.025848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.025962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.025994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.026119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.026150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.026278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.026310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 
00:24:18.814 [2024-07-15 10:41:07.026414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.026445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.026577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.026609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.026745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.026776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.026923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.026955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.027109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.027140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.027275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.027308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.027462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.027534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.027684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.027720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.027873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.027905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.028012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.028044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 
00:24:18.814 [2024-07-15 10:41:07.028187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.028220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.028346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.028377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.028507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.028538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.028669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.028701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.814 qpair failed and we were unable to recover it. 00:24:18.814 [2024-07-15 10:41:07.028842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.814 [2024-07-15 10:41:07.028879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.029002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.029034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.029167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.029198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.029307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.029338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.029493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.029529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.029661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.029696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 
00:24:18.815 [2024-07-15 10:41:07.029858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.029891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.029999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.030031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.030161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.030193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.030300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.030336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.030470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.030500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.030634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.030667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.030840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.030872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.030971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.031003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.031167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.031198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.031330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.031361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 
00:24:18.815 [2024-07-15 10:41:07.031513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.031550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.031668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.031704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.031839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.031872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.032014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.032046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.032154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.032185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.032328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.032358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.032464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.032498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.032623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.032653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.032786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.032825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.032932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.032963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 
00:24:18.815 [2024-07-15 10:41:07.033096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.033129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.033232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.033265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.033401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.033433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.033544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.033575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.033685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.033716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.033849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.033881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.034048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.034083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.034233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.034269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.034387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.034423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.034573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.034609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 
00:24:18.815 [2024-07-15 10:41:07.034787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.034833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.034955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.034991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.035167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.035203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.035315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.035350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.035538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.035574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.035718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.035755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.035888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.035927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.036070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.036107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.036229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.036265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.036393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.036429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 
00:24:18.815 [2024-07-15 10:41:07.036573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.036609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.036719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.036755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.036900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.036938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.037083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.037125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.037279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.037316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.037477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.037512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.037731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.037789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.037939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.037976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.038103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.038139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.038288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.038324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 
00:24:18.815 [2024-07-15 10:41:07.038512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.038548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.038686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.038750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.038927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.038965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.039111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.039147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.039300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.039336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.039513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.039550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.039665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.039702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.039848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.039886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.039998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.040036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 00:24:18.815 [2024-07-15 10:41:07.040166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.815 [2024-07-15 10:41:07.040202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.815 qpair failed and we were unable to recover it. 
00:24:18.815 [2024-07-15 10:41:07.040327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.040362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.040487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.040522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.040642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.040677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.040807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.040845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.040967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.041004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.041156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.041192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.041349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.041384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.041529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.041564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.041686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.041723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.041836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.041873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 
00:24:18.816 [2024-07-15 10:41:07.042055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.042091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.042228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.042264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.042381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.042416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.042569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.042605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.042797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.042845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.043007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.043045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.043229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.043267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.043388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.043442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.043586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.043621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.043799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.043847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 
00:24:18.816 [2024-07-15 10:41:07.044011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.044049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.044203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.044240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.044384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.044421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.044606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.044650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.044783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.044854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.045002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.045039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.045157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.045193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.045361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.045396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.045542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.045578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.045727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.045764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 
00:24:18.816 [2024-07-15 10:41:07.045963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.046002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.046151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.046189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.046343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.046381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.046566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.046604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.046734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.046772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.046915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.046952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.047089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.047125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.047241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.047276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.047439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.047476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.047659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.047697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 
00:24:18.816 [2024-07-15 10:41:07.047820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.047859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.047988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.048042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.048222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.048273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.048431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.048469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.048667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.048728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.048936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.048973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.049122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.049157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.049337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.049390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.049574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.049611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.049752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.049788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 
00:24:18.816 [2024-07-15 10:41:07.049950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.049988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.050112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.050148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.050341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.050378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.050575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.050611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.050748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.050783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.050936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.050976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.816 qpair failed and we were unable to recover it. 00:24:18.816 [2024-07-15 10:41:07.051159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.816 [2024-07-15 10:41:07.051195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.051337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.051372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.051520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.051556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.051709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.051745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 
00:24:18.817 [2024-07-15 10:41:07.051938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.051975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.052099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.052135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.052308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.052344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.052468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.052511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.052649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.052687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.052849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.052889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.053039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.053074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.053250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.053303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.053485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.053523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.053706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.053743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 
00:24:18.817 [2024-07-15 10:41:07.053918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.053955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.054117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.054170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.054359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.054395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.054570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.054606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.054717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.054751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.054869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.054907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.055048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.055086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.055247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.055285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.055447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.055484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.055643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.055681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 
00:24:18.817 [2024-07-15 10:41:07.055791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.055838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.055966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.056004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.056149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.056188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.056342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.056379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.056561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.056599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.056730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.056769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.056964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.057023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.057221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.057261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.057393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.057430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.057570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.057606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 
00:24:18.817 [2024-07-15 10:41:07.057735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.057773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.057964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a60e0 is same with the state(5) to be set 00:24:18.817 [2024-07-15 10:41:07.058219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.058278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.058427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.058470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.058710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.058774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.058947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.058988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.059135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.059172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.059372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.059410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.059561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.059625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.059816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.059855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 
00:24:18.817 [2024-07-15 10:41:07.059990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.060028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.060157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.060195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.060348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.060386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.060543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.060582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.060749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.060788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.060944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.060983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.061185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.061222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.061416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.061454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.061618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.061654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.061816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.061853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 
00:24:18.817 [2024-07-15 10:41:07.062004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.062042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.062170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.062206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.062384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.062437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.062619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.062657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.062815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.062852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.062995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.063031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.063210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.063249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.063404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.063451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.063643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.063684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.063884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.063926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 
00:24:18.817 [2024-07-15 10:41:07.064060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.064100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.064233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.064274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.064437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.064477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.064644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.064680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.064829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.817 [2024-07-15 10:41:07.064884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.817 qpair failed and we were unable to recover it. 00:24:18.817 [2024-07-15 10:41:07.065069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.065107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.065277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.065315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.065438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.065475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.065659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.065695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.065850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.065890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 
00:24:18.818 [2024-07-15 10:41:07.066046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.066085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.066216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.066268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.066407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.066442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.066582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.066618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.066737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.066773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.066963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.067019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.067165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.067206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.067388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.067430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.067596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.067639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.067836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.067877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 
00:24:18.818 [2024-07-15 10:41:07.068034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.068071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.068222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.068277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.068437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.068477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.068618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.068658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.068845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.068906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.069039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.069081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.069225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.069266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.069428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.069467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.069626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.069666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.069846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.069884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 
00:24:18.818 [2024-07-15 10:41:07.070039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.070074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.070224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.070258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.070432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.070471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.070611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.070672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.070832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.070870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.070995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.071031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.071171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.071208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.071359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.071396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.071538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.071578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.071713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.071753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 
00:24:18.818 [2024-07-15 10:41:07.071941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.071979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.072095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.072134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.072305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.072346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.072469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.072509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.072675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.072715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.072905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.072948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.073077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.073117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.073279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.073320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.073458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.073499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.073611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.073651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 
00:24:18.818 [2024-07-15 10:41:07.073795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.073841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.073998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.074036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.074166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.074202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.074382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.074422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.074581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.074622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.074788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.074839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.075001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.075043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.075221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.075257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.075406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.075444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.075596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.075633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 
00:24:18.818 [2024-07-15 10:41:07.075868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.075910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.076077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.076117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.076269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.076310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.076486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.076523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.818 [2024-07-15 10:41:07.076666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.818 [2024-07-15 10:41:07.076709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.818 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.076847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.076885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.077029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.077065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.077195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.077234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.077382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.077423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.077632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.077668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 
00:24:18.819 [2024-07-15 10:41:07.077821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.077876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.078035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.078076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.078245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.078286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.078425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.078461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.078575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.078611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.078758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.078794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.079002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.079038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.079146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.079182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.079374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.079427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.079607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.079644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 
00:24:18.819 [2024-07-15 10:41:07.079858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.079895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.080093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.080136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.080284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.080320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.080428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.080465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.080687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.080722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.080874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.080936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.081088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.081131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.081309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.081345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.081487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.081523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.081640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.081676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 
00:24:18.819 [2024-07-15 10:41:07.081857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.081895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.082064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.082108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.082258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.082298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.082456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.082498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.082654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.082694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.082862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.082918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.083113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.083154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.083359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.083395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.083511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.083554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.083727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.083769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 
00:24:18.819 [2024-07-15 10:41:07.083905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.083947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.084117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.084158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.084388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.084431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.084612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.084651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.084833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.084885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.085033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.085075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.085213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.085255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.085480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.085523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.085691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.085736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.085892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.085928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 
00:24:18.819 [2024-07-15 10:41:07.086069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.086125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.086334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.086376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.086600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.086640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.086811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.086867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.086986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.087022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.087167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.087204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.087328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.087365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.087521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.087559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.087755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.087795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.087947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.087982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 
00:24:18.819 [2024-07-15 10:41:07.088123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.088158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.088273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.088308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.088481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.088520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.088679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.088721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.088900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.088935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.089074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.089108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.089280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.089320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.089465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.089504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.089646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.089721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.819 qpair failed and we were unable to recover it. 00:24:18.819 [2024-07-15 10:41:07.089920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.819 [2024-07-15 10:41:07.089955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 
00:24:18.820 [2024-07-15 10:41:07.090127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.090169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.090325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.090365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.090501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.090538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.090654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.090711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.090926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.090961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.091138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.091181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.091373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.091413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.091612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.091678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.091859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.091894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.092019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.092054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 
00:24:18.820 [2024-07-15 10:41:07.092265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.092335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.092600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.092652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.092877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.092912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.093032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.093066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.093248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.093294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.093453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.093493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.093656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.093696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.093849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.093885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.094026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.094060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 00:24:18.820 [2024-07-15 10:41:07.094229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.820 [2024-07-15 10:41:07.094268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.820 qpair failed and we were unable to recover it. 
00:24:18.820 [2024-07-15 10:41:07.094384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.820 [2024-07-15 10:41:07.094424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420
00:24:18.820 qpair failed and we were unable to recover it.
00:24:18.820 [... the same three-line error sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every intervening reconnection attempt logged between 10:41:07.094544 and 10:41:07.138538 ...]
00:24:18.823 [2024-07-15 10:41:07.138682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.823 [2024-07-15 10:41:07.138726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420
00:24:18.823 qpair failed and we were unable to recover it.
00:24:18.823 [2024-07-15 10:41:07.138916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.138959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.139102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.139146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.139347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.139389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.139562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.139604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.139744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.139785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.139946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.139989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.140187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.140229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.140389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.140432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.140610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.140652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.140851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.140893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 
00:24:18.823 [2024-07-15 10:41:07.141066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.141109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.141283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.141325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.141456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.141499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.141659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.141702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.141861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.141905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.142104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.142147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.142315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.142356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.142564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.142606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.142819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.142863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 00:24:18.823 [2024-07-15 10:41:07.143008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.823 [2024-07-15 10:41:07.143050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.823 qpair failed and we were unable to recover it. 
00:24:18.824 [2024-07-15 10:41:07.143222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.143264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.143447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.143505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.143745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.143788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.143976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.144018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.144177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.144225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.144390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.144432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.144573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.144617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.144821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.144865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.145035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.145077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.145246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.145288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 
00:24:18.824 [2024-07-15 10:41:07.145492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.145534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.145659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.145701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.145822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.145866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.146070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.146112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.146281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.146323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.146498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.146540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.146666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.146708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.146904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.146946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.147129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.147171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.147314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.147356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 
00:24:18.824 [2024-07-15 10:41:07.147553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.147595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.147730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.147772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.147935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.147977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.148151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.148195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.148329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.148372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.148533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.148575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.148706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.148748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.148971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.149014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.149182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.149224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.149391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.149432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 
00:24:18.824 [2024-07-15 10:41:07.149638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.149681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.149863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.149907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.150077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.150119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.150289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.150333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.150535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.150578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.150775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.150831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.151037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.151079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.151219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.151262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.151504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.151546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.151721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.151763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 
00:24:18.824 [2024-07-15 10:41:07.151960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.152020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.152240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.152282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.152434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.152494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.152668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.152713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.152897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.152952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.153141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.153187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.153345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.153387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.153567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.153610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.153754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.153797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.153974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.154019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 
00:24:18.824 [2024-07-15 10:41:07.154195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.154254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.154415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.154457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.154607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.154650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.154772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.154823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.154973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.155016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.155141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.155183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.155369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.155414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.824 [2024-07-15 10:41:07.155642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.824 [2024-07-15 10:41:07.155684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.824 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.155823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.155867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.155996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.156037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 
00:24:18.825 [2024-07-15 10:41:07.156196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.156243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.156419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.156464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.156628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.156674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.156850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.156896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.157033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.157077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.157290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.157335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.157545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.157590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.157767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.157819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.157982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.158027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.158203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.158250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 
00:24:18.825 [2024-07-15 10:41:07.158460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.158505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.158742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.158787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.158983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.159027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.159206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.159266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.159430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.159472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.159646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.159704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.159945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.159992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.160179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.160223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.160397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.160443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.160577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.160621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 
00:24:18.825 [2024-07-15 10:41:07.160769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.160824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.160999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.161044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.161184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.161230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.161442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.161488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.161625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.161669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.161853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.161899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.162038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.162084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.162259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.162305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.162488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.162533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.162742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.162786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 
00:24:18.825 [2024-07-15 10:41:07.162984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.163028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.163204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.163249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.163426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.163473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.163682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.163727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.163940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.163986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.164183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.164226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.164385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.164427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.164621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.164667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.164864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.164913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.165106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.165155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 
00:24:18.825 [2024-07-15 10:41:07.165353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.165396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.165586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.165633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.165862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.165906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.166123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.166165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.166331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.166373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.166513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.166556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.166685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.166728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.166902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.166960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.167184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.167232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.167423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.167470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 
00:24:18.825 [2024-07-15 10:41:07.167688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.167737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.167971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.168027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.168272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.168314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.168481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.168543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.168698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.168745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.168987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.169036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.169255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.169302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.169443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.169489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.169723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.169766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.169923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.169990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 
00:24:18.825 [2024-07-15 10:41:07.170137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.170186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.170372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.170422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.170609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.170657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.825 qpair failed and we were unable to recover it. 00:24:18.825 [2024-07-15 10:41:07.170871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.825 [2024-07-15 10:41:07.170920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.171090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.171138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.171349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.171392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.171542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.171585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.171780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.171865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.172083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.172131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.172318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.172365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 
00:24:18.826 [2024-07-15 10:41:07.172551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.172601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.172791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.172862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.173015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.173063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.173261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.173309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.173490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.173539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.173699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.173748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.173964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.174014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.174164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.174212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.174422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.174471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.174637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.174686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 
00:24:18.826 [2024-07-15 10:41:07.174859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.174910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.175081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.175131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.175307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.175356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.175580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.175627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.175839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.175889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.176086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.176135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.176352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.176399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.176600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.176654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.176835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.176883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.177048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.177095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 
00:24:18.826 [2024-07-15 10:41:07.177289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.177336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.177481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.177535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.177696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.177743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.177952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.178001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.178196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.178243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.178424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.178471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.178661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.178708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.178920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.178969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.179140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.179188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.179408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.179455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 
00:24:18.826 [2024-07-15 10:41:07.179647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.179695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.179866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.179915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.180073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.180120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.180321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.180369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.180595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.180642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.180867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.180916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.181067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.181116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.181314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.181362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.181537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.181585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.181797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.181864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 
00:24:18.826 [2024-07-15 10:41:07.182024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.182075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.182282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.182332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.182539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.182591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.182828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.182888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.183086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.183137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.183368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.183420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.183589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.183640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.183844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.183896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.184151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.184203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.184398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.184449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 
00:24:18.826 [2024-07-15 10:41:07.184673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.184725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.184942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.184993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.185175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.185226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.185434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.185484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.185665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.185716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.826 qpair failed and we were unable to recover it. 00:24:18.826 [2024-07-15 10:41:07.185969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.826 [2024-07-15 10:41:07.186021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.186192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.186245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.186442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.186493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.186699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.186749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.186956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.187009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 
00:24:18.827 [2024-07-15 10:41:07.187211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.187264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.187458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.187517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.187704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.187759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.188000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.188053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.188210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.188261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.188422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.188473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.188687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.188742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.188977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.189031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.189202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.189252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.189457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.189507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 
00:24:18.827 [2024-07-15 10:41:07.189709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.189760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.189977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.190030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.190197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.190250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.190459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.190511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.190740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.190797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.191019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.191070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.191243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.191295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.191527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.191577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.191841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.191894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.192103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.192155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 
00:24:18.827 [2024-07-15 10:41:07.192316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.192366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.192599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.192649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.192846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.192899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.193112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.193163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.193323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.193375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.193549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.193599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.193793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.193855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.194067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.194117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.194330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.194382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.194597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.194647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 
00:24:18.827 [2024-07-15 10:41:07.194890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.194945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.195118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.195172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.195369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.195420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.195646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.195701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.195923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.195979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.196187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.196241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.196474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.196529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.196725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.196776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.196984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.197036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.197235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.197285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 
00:24:18.827 [2024-07-15 10:41:07.197475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.197525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.197699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.197758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.198040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.198092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.198293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.198344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.198523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.198574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.198829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.198900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.199108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.199159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.199356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.199409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.199608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.199660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.199885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.199942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 
00:24:18.827 [2024-07-15 10:41:07.200164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.200219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.200402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.200457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.200708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.200762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.201046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.827 [2024-07-15 10:41:07.201097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.827 qpair failed and we were unable to recover it. 00:24:18.827 [2024-07-15 10:41:07.201296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.201349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.201555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.201606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.201798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.201861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.202078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.202134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.202353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.202407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.202580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.202636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 
00:24:18.828 [2024-07-15 10:41:07.202845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.202902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.203145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.203200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.203426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.203481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.203690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.203744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.203989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.204044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.204247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.204302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.204553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.204607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.204825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.204880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.205106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.205161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.205377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.205431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 
00:24:18.828 [2024-07-15 10:41:07.205605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.205658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.205835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.205890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.206058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.206115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.206339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.206394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.206603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.206658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.206875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.206933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.207101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.207159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.207379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.207435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.207640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.207694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.207902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.207959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 
00:24:18.828 [2024-07-15 10:41:07.208148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.208206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.208393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.208458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.208645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.208699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.208915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.208972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.209161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.209215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.209461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.209516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.209728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.209781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.210028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.210083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.210272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.210325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.210531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.210585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 
00:24:18.828 [2024-07-15 10:41:07.210772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.210852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.211070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.211125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.211325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.211378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.211593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.211647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.211886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.211942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.212164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.212219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.212472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.212526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.212743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.212798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.213066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.213122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.213332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.213386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 
00:24:18.828 [2024-07-15 10:41:07.213594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.213647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.213857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.213914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.214111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.214166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.214406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.214460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.214633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.214689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.214882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.214939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.215186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.215241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.215466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.215520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.215771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.215842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.216068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.216123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 
00:24:18.828 [2024-07-15 10:41:07.216371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.216426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.216606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.216660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.216912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.216968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.217232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.217287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.217530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.217584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.217773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.217841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.218121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.218194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.828 [2024-07-15 10:41:07.218464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.828 [2024-07-15 10:41:07.218535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.828 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.218793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.218859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.219115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.219187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 
00:24:18.829 [2024-07-15 10:41:07.219400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.219455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.219659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.219722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.219999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.220054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.220271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.220328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.220584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.220639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.220889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.220946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.221182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.221237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.221425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.221479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.221651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.221708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.221988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.222062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 
00:24:18.829 [2024-07-15 10:41:07.222302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.222375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.222617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.222671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.222898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.222974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.223208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.223281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.223490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.223545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.223815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.223872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.224080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.224154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.224399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.224472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.224652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.224706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.224932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.225005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 
00:24:18.829 [2024-07-15 10:41:07.225288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.225343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.225545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.225600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.225775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.225847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.226053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.226109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.226303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.226358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.226570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.226627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.226901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.226977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.227222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.227277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.227466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.227521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.227771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.227847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 
00:24:18.829 [2024-07-15 10:41:07.228059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.228114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.228361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.228415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.228646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.228700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.228964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.229037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.229277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.229349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.229604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.229657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.229932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.230004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.230256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.230327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.230577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.230630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.230842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.230899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 
00:24:18.829 [2024-07-15 10:41:07.231112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.231188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.231378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.231458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.231678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.231733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.232031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.232104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.232374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.232445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.232659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.232713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.233095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.233153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.233366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.233420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.233602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.233656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.233875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.233929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 
00:24:18.829 [2024-07-15 10:41:07.234199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.234273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.234451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.234507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.234680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.234737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.235015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.829 [2024-07-15 10:41:07.235070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.829 qpair failed and we were unable to recover it. 00:24:18.829 [2024-07-15 10:41:07.235319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.235399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.235640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.235694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.235938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.236012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.236237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.236309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.236471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.236524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.236707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.236764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 
00:24:18.830 [2024-07-15 10:41:07.237011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.237085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.237320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.237391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.237568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.237625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.237825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.237881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.238137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.238209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.238405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.238460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.238676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.238729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.238957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.239029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.239273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.239344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.239584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.239637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 
00:24:18.830 [2024-07-15 10:41:07.239853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.239910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.240121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.240176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.240373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.240453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.240698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.240752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.240953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.241029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.241232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.241305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.241516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.241570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.241785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.241856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.242066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.242121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.242393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.242465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 
00:24:18.830 [2024-07-15 10:41:07.242672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.242729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.242974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.243056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.243307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.243379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.243591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.243646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.243893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.243969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.244151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.244205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.244382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.244436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.244673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.244726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.244948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.245004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.245207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.245261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 
00:24:18.830 [2024-07-15 10:41:07.245426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.245480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.245690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.245744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.246015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.246070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.246287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.246358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.246571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.246628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.246861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.246917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.247127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.247184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.247366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.247420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.247640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.247694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.247920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.247993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 
00:24:18.830 [2024-07-15 10:41:07.248231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.248303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.248517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.248572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.248777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.248844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.249079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.249153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.249403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.249459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.249705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.249760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.250036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.250108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.250337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.250410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.250635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.250690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.250961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.251036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 
00:24:18.830 [2024-07-15 10:41:07.251273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.251346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.251534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.251591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.251848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.251904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.252145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.252219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.252517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.252592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.252819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.252875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.253095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.253149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.253429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.253502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.253743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.253797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.254032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.254086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 
00:24:18.830 [2024-07-15 10:41:07.254362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.830 [2024-07-15 10:41:07.254436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.830 qpair failed and we were unable to recover it. 00:24:18.830 [2024-07-15 10:41:07.254677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.254759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.255039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.255094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.255361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.255418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.255674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.255730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.255951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.256029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.256307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.256379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.256629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.256686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.256910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.256986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.257223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.257298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 
00:24:18.831 [2024-07-15 10:41:07.257536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.257610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.257814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.257870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.258151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.258225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.258443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.258516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.258723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.258779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.259085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.259166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.259442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.259515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.259688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.259742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.260009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.260083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.260292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.260368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 
00:24:18.831 [2024-07-15 10:41:07.260588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.260643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.260835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.260891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.261167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.261239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.261497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.261551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.261789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.261856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.262129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.262203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.262441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.262513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.262752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.262817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.263067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.263142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.263377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.263448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 
00:24:18.831 [2024-07-15 10:41:07.263660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.263715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.264014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.264088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.264308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.264362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.264568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.264622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.264793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.264866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.265163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.265238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.265493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.265550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.265763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.265833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.266128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.266201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.266453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.266525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 
00:24:18.831 [2024-07-15 10:41:07.266732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.266786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.267091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.267178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.267422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.267496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.267714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.267770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.268071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.268146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.268430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.268503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.268690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.268745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.269005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.269079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.269283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.269357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.269570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.269626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 
00:24:18.831 [2024-07-15 10:41:07.269842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.269900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.270156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.270212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.270408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.270463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.270700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.270755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.270994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.271068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.271238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.271296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.271537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.271591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.271734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.271788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.272014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.272091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.272338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.272408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 
00:24:18.831 [2024-07-15 10:41:07.272623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.272678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.272902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.272976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.273224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.273296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.273497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.831 [2024-07-15 10:41:07.273552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.831 qpair failed and we were unable to recover it. 00:24:18.831 [2024-07-15 10:41:07.273748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.273818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.274009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.274065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.274309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.274382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.274588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.274642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.274835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.274892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.275150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.275205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 
00:24:18.832 [2024-07-15 10:41:07.275415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.275491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.275682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.275738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.276049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.276124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.276378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.276452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.276694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.276748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.277046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.277120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.277367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.277440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.277667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.277721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.277977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.278051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.278289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.278361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 
00:24:18.832 [2024-07-15 10:41:07.278607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.278663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.278934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.279018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.279316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.279387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.279601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.279656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.279838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.279894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.280127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.280199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.280481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.280553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.280771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.280840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.281073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.281148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.281432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.281504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 
00:24:18.832 [2024-07-15 10:41:07.281759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.281824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.282064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.282139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.282372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.282443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.282659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.282713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.282908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.282985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.283249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.283321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.283578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.283651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.283830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.283887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.284129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.284202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.284460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.284493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 
00:24:18.832 [2024-07-15 10:41:07.284639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.284679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.284829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.284864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.285009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.285047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.285171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.285212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.285355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.285390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.285537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.285572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.285722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.285757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.285895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.285941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.286142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.286192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.286317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.286361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 
00:24:18.832 [2024-07-15 10:41:07.286506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.286541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.286686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.286719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.286869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.286907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.287031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.287065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.287183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.287218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.287356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.287389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.287556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.287594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.287713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.287747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.287903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.287949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 00:24:18.832 [2024-07-15 10:41:07.288100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.288150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.832 qpair failed and we were unable to recover it. 
00:24:18.832 [2024-07-15 10:41:07.288324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.832 [2024-07-15 10:41:07.288358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.288499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.288533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.288656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.288689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.288814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.288848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.288990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.289023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.289150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.289182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.289293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.289325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.289443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.289475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.289607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.289639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.289780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.289819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 
00:24:18.833 [2024-07-15 10:41:07.289936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.289968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.290084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.290117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.290226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.290260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.290371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.290401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.290536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.290569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.290716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.290749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.290889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.290922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.291040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.291072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.291212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.291245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.291391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.291423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 
00:24:18.833 [2024-07-15 10:41:07.291528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.291561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.291720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.291752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.291871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.291905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.292052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.292085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.292192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.292225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.292368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.292401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.292501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.292534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.292670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.292702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.292869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.292909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.293041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.293074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 
00:24:18.833 [2024-07-15 10:41:07.293187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.293219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.293353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.293386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.293491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.293525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.293637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.293670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.293810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.293843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.293983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.294015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.294147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.294180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.294285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.294318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.294486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.294519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.294616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.294648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 
00:24:18.833 [2024-07-15 10:41:07.294925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.294980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.295213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.295270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.295546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.295603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.295854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.295910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.296158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.296217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.296488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.296547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.296771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.296862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.297084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.297158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.297409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.297467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.297728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.297786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 
00:24:18.833 [2024-07-15 10:41:07.298002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.298057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.298285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.298344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.298573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.298630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.298834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.298906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.299118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.299174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.299463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.299496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.299632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.299665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.299856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.299888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.300027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.300059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.300245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.300300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 
00:24:18.833 [2024-07-15 10:41:07.300575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.300632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.833 [2024-07-15 10:41:07.300911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.833 [2024-07-15 10:41:07.300966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.833 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.301230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.301288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.301535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.301592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.301841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.301896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.302122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.302180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.302439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.302496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.302716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.302773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.302996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.303058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.303298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.303356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 
00:24:18.834 [2024-07-15 10:41:07.303616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.303673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.303892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.303948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.304161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.304243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.304496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.304572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.304858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.304893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.305005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.305039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.305304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.305377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.305630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.305702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.305934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.305989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.306220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.306274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 
00:24:18.834 [2024-07-15 10:41:07.306511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.306585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.306844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.306899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.307048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.307105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.307361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.307417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.307599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.307655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.307906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.307983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.308272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.308345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.308583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.308655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.308883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.308917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.309059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.309094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 
00:24:18.834 [2024-07-15 10:41:07.309288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.309367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.309589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.309645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.309828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.309884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.310125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.310179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.310451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.310523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.310745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.310825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.311081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.311154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.311318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.311375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.311571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.311625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.311839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.311895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 
00:24:18.834 [2024-07-15 10:41:07.312100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.312175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.312434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.312488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.312733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.312787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.313008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.313082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.313314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.313388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.313603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.313659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.313851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.313907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.314131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.314164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.314275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.314314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.314456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.314490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 
00:24:18.834 [2024-07-15 10:41:07.314744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.314799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.315067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.315140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.315387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.315458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.315614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.315666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.315812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.315846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.316050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.316126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.316331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.316404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.316650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.316705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.316911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.316985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.317209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.317263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 
00:24:18.834 [2024-07-15 10:41:07.317511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.317565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.317721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.317776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.318053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.318126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.834 [2024-07-15 10:41:07.318407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.834 [2024-07-15 10:41:07.318441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.834 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.318603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.318636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.318793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.318858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.319122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.319193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.319406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.319478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.319692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.319746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.320028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.320102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 
00:24:18.835 [2024-07-15 10:41:07.320302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.320374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.320578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.320633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.320841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.320898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.321137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.321211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.321432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.321504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.321719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.321774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.322056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.322129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.322403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.322475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.322682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.322738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.323032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.323110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 
00:24:18.835 [2024-07-15 10:41:07.323325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.323398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.323555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.323610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.323815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.323848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.324018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.324084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.324321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.324372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.324549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.324605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.324777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.324863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.325127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.325181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.325392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.325456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.325677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.325732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 
00:24:18.835 [2024-07-15 10:41:07.326020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.326093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.326337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.326409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.326655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.326709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.326950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.327026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.327323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.327397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.327608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.327661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.327911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.327987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.328279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.328351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.328561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.328615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.328778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.328846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 
00:24:18.835 [2024-07-15 10:41:07.329048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.329120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.329345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.329416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.329594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.329652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.329888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.329962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.330186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.330257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.330450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.330503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.330692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.330746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.330986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.331058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.331246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.331319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.331560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.331614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 
00:24:18.835 [2024-07-15 10:41:07.331861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.331917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.332147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.332202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.332415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.332470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.332678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.332735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.332986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.333042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.333267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.333323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.333576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.333630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.333867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.333921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.334137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.835 [2024-07-15 10:41:07.334191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.835 qpair failed and we were unable to recover it. 00:24:18.835 [2024-07-15 10:41:07.334450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.334504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 
00:24:18.836 [2024-07-15 10:41:07.334717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.334773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.335007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.335079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.335321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.335393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.335616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.335671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.335899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.335974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.336234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.336304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.336520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.336576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.336754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.336822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.337079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.337165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.337394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.337468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 
00:24:18.836 [2024-07-15 10:41:07.337682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.337737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.337989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.338062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.338296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.338370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.338578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.338634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.338825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.338880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.339097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.339170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.339437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.339509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.339717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.339775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.339988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.340044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.340292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.340348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 
00:24:18.836 [2024-07-15 10:41:07.340501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.340555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.340752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.340838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.341040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.341095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.341312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.341369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.341535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.341590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:18.836 [2024-07-15 10:41:07.341799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.836 [2024-07-15 10:41:07.341874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:18.836 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.342065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.342120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.342337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.342393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.342609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.342664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.342854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.342911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 
00:24:19.113 [2024-07-15 10:41:07.343129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.343182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.343290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.343323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.343457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.343491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.343598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.343677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.343895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.343932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.344078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.344113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.344365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.344420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.344603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.344666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.344895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.344930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.345072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.345138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 
00:24:19.113 [2024-07-15 10:41:07.345353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.345410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.345637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.345697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.345901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.345936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.346052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.346089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.346348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.346405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.346585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.346640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.346843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.346880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.347020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.347053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.347248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.347318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.347495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.347552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 
00:24:19.113 [2024-07-15 10:41:07.347769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.347853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.347999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.348032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.348199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.348254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.348463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.348517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.348731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.348785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.349010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.349044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.349245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.349318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.349490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.349546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.349749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.349818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.350013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.350046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 
00:24:19.113 [2024-07-15 10:41:07.350281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.350353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.350561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.350616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.350875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.350909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.351033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.351066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.351181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.351216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.351371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.351427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.351673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.351727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.351951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.351985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.352149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.352203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.352409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.352463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 
00:24:19.113 [2024-07-15 10:41:07.352676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.352729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.352970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.353003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.353138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.353205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.353453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.353507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.353715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.353769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.353935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.353970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.354181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.354254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.354507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.354562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.354769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.354840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.354983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.355016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 
00:24:19.113 [2024-07-15 10:41:07.355132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.355166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.355382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.355455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.355674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.355729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.355910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.113 [2024-07-15 10:41:07.355944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.113 qpair failed and we were unable to recover it. 00:24:19.113 [2024-07-15 10:41:07.356110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.356165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.356373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.356429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.356626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.356680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.356901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.356937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.357075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.357114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.357328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.357401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 
00:24:19.114 [2024-07-15 10:41:07.357598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.357652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.357906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.357962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.358189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.358260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.358507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.358560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.358727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.358781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.359008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.359065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.359248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.359304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.359523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.359578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.359836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.359892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.360081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.360160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 
00:24:19.114 [2024-07-15 10:41:07.360414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.360486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.360733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.360786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.361105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.361163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.361369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.361444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.361688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.361744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.361966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.362041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.362263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.362336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.362549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.362605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.362983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.363039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.363295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.363350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 
00:24:19.114 [2024-07-15 10:41:07.363587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.363642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.363871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.363927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.364091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.364147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.364419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.364491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.364731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.364787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.364978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.365041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.365239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.365293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.365470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.365524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.365742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.365796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.366068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.366125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 
00:24:19.114 [2024-07-15 10:41:07.366369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.366424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.366612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.366666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.366938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.367014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.367203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.367276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.367570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.367644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.367879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.367954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.368227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.368299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.368525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.368579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.368827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.368861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.368982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.369017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 
00:24:19.114 [2024-07-15 10:41:07.369244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.369318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.369571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.369628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.369868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.369946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.370186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.370259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.370503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.370557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.370819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.370875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.371100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.371174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.371406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.371478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.371729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.371784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.372037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.372109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 
00:24:19.114 [2024-07-15 10:41:07.372387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.372460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.372669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.372723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.372981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.373055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.373309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.373343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.373435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.373469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.114 [2024-07-15 10:41:07.373580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.114 [2024-07-15 10:41:07.373613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.114 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.373851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.373907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.374164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.374218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.374438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.374494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.374706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.374764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 
00:24:19.115 [2024-07-15 10:41:07.374995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.375079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.375324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.375398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.375565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.375619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.375850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.375907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.376125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.376199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.376424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.376487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.376691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.376744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.377021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.377094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.377351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.377384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.377552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.377584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 
00:24:19.115 [2024-07-15 10:41:07.377774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.377845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.378076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.378132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.378378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.378434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.378675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.378730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.378982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.379057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.379273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.379345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.379558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.379612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.379879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.379957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.380237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.380311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.380489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.380543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 
00:24:19.115 [2024-07-15 10:41:07.380754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.380821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.381108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.381181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.381481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.381557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.381817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.381873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.382109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.382180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.382468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.382540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.382711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.382767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.383007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.383079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.383238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.383295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.383571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.383642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 
00:24:19.115 [2024-07-15 10:41:07.383853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.383910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.384149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.384222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.384519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.384593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.384826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.384886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.385121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.385193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.385449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.385521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.385718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.385768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.385883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.385909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.386046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.386071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.386166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.386192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 
00:24:19.115 [2024-07-15 10:41:07.386279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.386304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.386395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.386421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.386513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.386538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.386652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.386676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.386761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.386785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.386898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.386928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.387020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.115 [2024-07-15 10:41:07.387044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.115 qpair failed and we were unable to recover it. 00:24:19.115 [2024-07-15 10:41:07.387135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.387162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.387253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.387278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.387359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.387384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 
00:24:19.116 [2024-07-15 10:41:07.387487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.387513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.387650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.387675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.387787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.387821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.387955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.387984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.388073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.388099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.388191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.388217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.388302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.388329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.388467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.388495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.388613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.388637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.388752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.388778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 
00:24:19.116 [2024-07-15 10:41:07.388889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.388916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.389047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.389071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.389161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.389187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.389300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.389326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.389414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.389440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.389573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.389598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.389712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.389737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.389866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.389923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.390034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.390077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.390212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.390282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 
00:24:19.116 [2024-07-15 10:41:07.390458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.390484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.390569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.390595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.390694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.390720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.390866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.390894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.390998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.391025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.391115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.391142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.391253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.391289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.391419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.391443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.391527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.391551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.391664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.391689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 
00:24:19.116 [2024-07-15 10:41:07.391797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.391828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.391939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.391964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.392044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.392069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.392179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.392203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.392276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.392301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.392383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.392412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.392499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.392525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.392641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.392665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.392743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.392768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.392885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.392910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 
00:24:19.116 [2024-07-15 10:41:07.392995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.393020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.393099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.393123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.393234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.393260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.393336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.393360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.393449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.393473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.393588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.393613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.393684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.393709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.393828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.393855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.393946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.393973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.394064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.394089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 
00:24:19.116 [2024-07-15 10:41:07.394201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.394225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.394309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.394334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.394475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.394499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.394610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.394634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.394748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.394773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.394885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.394911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.395025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.395051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.395163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.395189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.395302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.395328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 00:24:19.116 [2024-07-15 10:41:07.395406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.116 [2024-07-15 10:41:07.395431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.116 qpair failed and we were unable to recover it. 
00:24:19.117 [2024-07-15 10:41:07.395578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.395603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.395696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.395721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.395863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.395889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.395974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.395999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.396113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.396137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.396274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.396298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.396431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.396456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.396563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.396589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.396693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.396745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.396914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.396940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 
00:24:19.117 [2024-07-15 10:41:07.397032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.397057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.397219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.397267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.397379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.397405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.397503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.397555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.397718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.397770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.397925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.397956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.398073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.398131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.398307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.398360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.398573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.398628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.398881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.398907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 
00:24:19.117 [2024-07-15 10:41:07.399002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.399028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.399166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.399221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.399406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.399462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.399629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.399683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.399907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.399933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.400048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.400073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.400246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.400297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.400494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.400543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.400731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.400783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.400968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.400993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 
00:24:19.117 [2024-07-15 10:41:07.401112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.401181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.401372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.401424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.401624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.401676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.401891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.401917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.402014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.402039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.402117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.402143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.402230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.402276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.402448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.402498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.402696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.402748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.402958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.402985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 
00:24:19.117 [2024-07-15 10:41:07.403102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.403127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.403267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.403293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.403414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.403440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.403559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.403610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.403812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.403883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.403981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.404009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.404173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.404222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.404337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.404364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.404484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.404538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.404743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.404798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 
00:24:19.117 [2024-07-15 10:41:07.404934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.404962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.405075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.405101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.405197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.405269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.405621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.405655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.405829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.405856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.405947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.405979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.406142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.406216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.406456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.406525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.406716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.406766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.406923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.406949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 
00:24:19.117 [2024-07-15 10:41:07.407061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.407108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.407315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.407373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.407523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.407590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.407826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.407873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.407985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.408011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.408161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.408212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.408329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.408363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.408583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.408616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.117 [2024-07-15 10:41:07.408715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.117 [2024-07-15 10:41:07.408748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.117 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.408892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.408918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 
00:24:19.118 [2024-07-15 10:41:07.409009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.409034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.409120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.409175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.409368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.409413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.409610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.409661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.409868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.409895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.409973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.409999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.410084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.410109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.410196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.410221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.410329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.410393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.410594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.410647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 
00:24:19.118 [2024-07-15 10:41:07.410862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.410889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.410982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.411007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.411107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.411133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.411270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.411320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.411506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.411557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.411788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.411873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.411961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.411986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.412102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.412152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.412299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.412352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.412589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.412616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 
00:24:19.118 [2024-07-15 10:41:07.412731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.412765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.412893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.412920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.413015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.413040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.413158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.413184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.413297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.413322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.413482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.413529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.413678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.413728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.413900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.413927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.414048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.414073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.414278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.414327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 
00:24:19.118 [2024-07-15 10:41:07.414523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.414572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.414770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.414848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.414974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.414999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.415137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.415163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.415361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.415386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.415559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.415609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.415826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.415876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.415990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.416017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.416157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.416183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.416277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.416303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 
00:24:19.118 [2024-07-15 10:41:07.416418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.416469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.416671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.416721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.416914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.416940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.417020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.417046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.417136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.417161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.417346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.417395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.417564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.417612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.417829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.417879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.417971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.417997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.418077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.418102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 
00:24:19.118 [2024-07-15 10:41:07.418221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.418247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.418399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.418448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.418647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.418696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.418880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.418906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.419004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.419030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.419172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.419221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.419456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.419505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.118 [2024-07-15 10:41:07.419650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.118 [2024-07-15 10:41:07.419696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.118 qpair failed and we were unable to recover it. 00:24:19.119 [2024-07-15 10:41:07.419897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.119 [2024-07-15 10:41:07.419924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.119 qpair failed and we were unable to recover it. 00:24:19.119 [2024-07-15 10:41:07.420016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.119 [2024-07-15 10:41:07.420042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.119 qpair failed and we were unable to recover it. 
00:24:19.119 [2024-07-15 10:41:07.420157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.119 [2024-07-15 10:41:07.420182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.119 qpair failed and we were unable to recover it. 00:24:19.119 [2024-07-15 10:41:07.420306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.119 [2024-07-15 10:41:07.420357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.119 qpair failed and we were unable to recover it. 00:24:19.119 [2024-07-15 10:41:07.420591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.119 [2024-07-15 10:41:07.420642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.119 qpair failed and we were unable to recover it. 00:24:19.119 [2024-07-15 10:41:07.420848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.119 [2024-07-15 10:41:07.420903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.119 qpair failed and we were unable to recover it. 00:24:19.119 [2024-07-15 10:41:07.421105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.119 [2024-07-15 10:41:07.421155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.119 qpair failed and we were unable to recover it. 00:24:19.119 [2024-07-15 10:41:07.421386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.119 [2024-07-15 10:41:07.421445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.119 qpair failed and we were unable to recover it. 00:24:19.119 [2024-07-15 10:41:07.421630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.119 [2024-07-15 10:41:07.421681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.119 qpair failed and we were unable to recover it. 00:24:19.119 [2024-07-15 10:41:07.421850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.119 [2024-07-15 10:41:07.421905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.119 qpair failed and we were unable to recover it. 00:24:19.119 [2024-07-15 10:41:07.422098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.119 [2024-07-15 10:41:07.422124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.119 qpair failed and we were unable to recover it. 00:24:19.119 [2024-07-15 10:41:07.422239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.119 [2024-07-15 10:41:07.422264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.119 qpair failed and we were unable to recover it. 
00:24:19.119 [2024-07-15 10:41:07.422389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.119 [2024-07-15 10:41:07.422438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420
00:24:19.119 qpair failed and we were unable to recover it.
00:24:19.119 [2024-07-15 10:41:07.422594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.119 [2024-07-15 10:41:07.422647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420
00:24:19.119 qpair failed and we were unable to recover it.
00:24:19.119 [2024-07-15 10:41:07.422834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.119 [2024-07-15 10:41:07.422885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420
00:24:19.119 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously through [2024-07-15 10:41:07.469927], with only the timestamps changing ...]
00:24:19.122 [2024-07-15 10:41:07.470122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.470168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.470372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.470440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.470657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.470704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.470921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.470987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.471191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.471257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.471413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.471460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.471679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.471726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.471983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.472050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.472292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.472357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.472514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.472562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 
00:24:19.122 [2024-07-15 10:41:07.472742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.472788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.472953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.472989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.473135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.473169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.473319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.473352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.473467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.473501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.473720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.473769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.474037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.474088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.474350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.474384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.474495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.474528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.474714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.474763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 
00:24:19.122 [2024-07-15 10:41:07.474959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.475034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.475242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.475276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.475393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.475425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.475571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.475619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.475933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.476000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.476239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.476304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.476495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.476544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.476721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.476768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.476975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.477023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.477260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.477293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 
00:24:19.122 [2024-07-15 10:41:07.477405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.477439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.477582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.477628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.477821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.477869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.478074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.478141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.478355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.478417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.478602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.478650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.478833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.478880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.479094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.479160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.479316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.479363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 00:24:19.122 [2024-07-15 10:41:07.479543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.122 [2024-07-15 10:41:07.479591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.122 qpair failed and we were unable to recover it. 
00:24:19.122 [2024-07-15 10:41:07.479824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.479874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.480035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.480084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.480279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.480326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.480513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.480560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.480741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.480786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.481020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.481087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.481296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.481366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.481585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.481633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.481858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.481907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.482130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.482196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 
00:24:19.123 [2024-07-15 10:41:07.482438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.482502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.482678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.482726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.482947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.483021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.483206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.483272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.483456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.483521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.483707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.483753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.483981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.484046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.484237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.484306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.484521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.484568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.484761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.484820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 
00:24:19.123 [2024-07-15 10:41:07.485052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.485118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.485325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.485390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.485644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.485691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.485872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.485942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.486151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.486216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.486434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.486499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.486695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.486742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.486996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.487063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.487215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.487265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.487401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.487433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 
00:24:19.123 [2024-07-15 10:41:07.487564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.487609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.487797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.487870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.488091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.488137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.488283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.488330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.488495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.488551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.488769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.488830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.488985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.489033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.489191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.489239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.489452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.489500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.489728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.489775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 
00:24:19.123 [2024-07-15 10:41:07.489968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.490041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.490238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.490286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.490473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.490551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.490745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.490792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.491006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.491075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.491323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.491389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.491545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.491593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.491734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.491782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.492029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.492096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.492249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.492299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 
00:24:19.123 [2024-07-15 10:41:07.492490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.492538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.492723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.492771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.492980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.493034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.493246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.493315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.493542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.493593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.493754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.493794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.493948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.493987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.494154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.494193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.494358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.494397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 00:24:19.123 [2024-07-15 10:41:07.494545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.123 [2024-07-15 10:41:07.494584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.123 qpair failed and we were unable to recover it. 
00:24:19.123 [2024-07-15 10:41:07.494759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.494819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.495008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.495053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.495270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.495315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.495498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.495545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.495700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.495747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.495994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.496041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.496203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.496251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.496441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.496483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.496626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.496665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.496885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.496956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 
00:24:19.124 [2024-07-15 10:41:07.497175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.497208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.497402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.497450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.497638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.497688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.497894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.497962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.498157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.498204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.498401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.498449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.498593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.498640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.498869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.498956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.499145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.499193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.499356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.499405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 
00:24:19.124 [2024-07-15 10:41:07.499621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.499668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.499872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.499920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.500084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.500125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.500335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.500381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.500562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.500609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.500757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.500816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.501014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.501060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.501283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.501324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.501544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.501592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.501751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.501799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 
00:24:19.124 [2024-07-15 10:41:07.501989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.502056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.502264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.502330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.502514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.502568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.502723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.502770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.503018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.503059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.503215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.503283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.503427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.503475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.503630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.503676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.503868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.503936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 00:24:19.124 [2024-07-15 10:41:07.504111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.124 [2024-07-15 10:41:07.504175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.124 qpair failed and we were unable to recover it. 
00:24:19.127 [2024-07-15 10:41:07.553346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.553414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.553605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.553653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.553851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.553914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.554082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.554128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.554345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.554392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.554581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.554627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.554818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.554866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.555045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.555091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.555251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.555298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.555477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.555525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 
00:24:19.127 [2024-07-15 10:41:07.555749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.555797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.555991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.556039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.556223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.556271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.556476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.556523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.556709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.556756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.556926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.556977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.557177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.557226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.557474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.557524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.557711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.557758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.557951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.558000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 
00:24:19.127 [2024-07-15 10:41:07.558165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.558212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.558431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.558478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.558666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.558722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.558991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.559061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.559282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.559330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.559553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.559601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.559788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.559860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.560021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.560070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.560267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.560315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.560522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.560570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 
00:24:19.127 [2024-07-15 10:41:07.560772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.560836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.561053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.561120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.561276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.561347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.561536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.561584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.561775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.127 [2024-07-15 10:41:07.561836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.127 qpair failed and we were unable to recover it. 00:24:19.127 [2024-07-15 10:41:07.562105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.562171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.562388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.562454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.562649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.562697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.562931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.563000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.563214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.563281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 
00:24:19.128 [2024-07-15 10:41:07.563472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.563520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.563714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.563763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.564005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.564063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.564286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.564351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.564535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.564585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.564776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.564836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.565011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.565061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.565218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.565266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.565452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.565499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.565718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.565765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 
00:24:19.128 [2024-07-15 10:41:07.565965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.566015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.566203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.566250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.566442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.566490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.566668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.566715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.566881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.566931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.567185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.567253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.567452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.567500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.567658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.567708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.567967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.568033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.568256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.568320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 
00:24:19.128 [2024-07-15 10:41:07.568518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.568565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.568725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.568775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.569042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.569107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.569375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.569441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.569589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.569638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.569811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.569861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.570048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.570114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.570326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.570391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.570585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.570632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.570905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.570975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 
00:24:19.128 [2024-07-15 10:41:07.571145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.571216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.571433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.571481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.571623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.571670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.571819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.571867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.572106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.572154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.572326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.572375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.572572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.572619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.572847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.572896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.573083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.573152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.573307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.573354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 
00:24:19.128 [2024-07-15 10:41:07.573545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.573591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.573776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.573834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.574050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.574124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.574385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.574451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.574641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.574689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.574846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.574894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.575079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.575148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.575339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.575387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.575578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.575625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.575838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.575888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 
00:24:19.128 [2024-07-15 10:41:07.576099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.576177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.576319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.576366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.576557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.576603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.576797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.576854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.577017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.577088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.577247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.577313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.577502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.577548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.577744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.128 [2024-07-15 10:41:07.577792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.128 qpair failed and we were unable to recover it. 00:24:19.128 [2024-07-15 10:41:07.577984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.578050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.578266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.578333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 
00:24:19.129 [2024-07-15 10:41:07.578487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.578534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.578738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.578786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.579010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.579078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.579241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.579288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.579474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.579521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.579661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.579707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.579914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.579980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.580144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.580191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.580338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.580384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.580593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.580642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 
00:24:19.129 [2024-07-15 10:41:07.580787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.580848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.580996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.581044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.581237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.581285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.581472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.581520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.581699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.581746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.581914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.581961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.582181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.582229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.582389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.582435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.582603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.582651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.586970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.587045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 
00:24:19.129 [2024-07-15 10:41:07.587311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.587379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.587656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.587723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.587913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.587972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.588181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.588248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.588461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.588531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.588750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.588798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.589026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.589093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.589374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.589440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.589594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.589645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.589835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.589887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 
00:24:19.129 [2024-07-15 10:41:07.590116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.590164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.590378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.590443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.590646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.590695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.590915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.590981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.591196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.591264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.591477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.591526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.591731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.591779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.592043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.592111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.592346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.592395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.592593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.592641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 
00:24:19.129 [2024-07-15 10:41:07.592772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.592834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.593069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.593141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.593336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.593403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.593584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.593631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.593825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.593873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.594113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.594180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.594452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.594521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.594714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.594764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.595019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.595087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.595313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.595381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 
00:24:19.129 [2024-07-15 10:41:07.595610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.595658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.595929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.595997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.596175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.596242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.596413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.596486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.596686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.596733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.596933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.597011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.597228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.597297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.597477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.597525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.597719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.597766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.597988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.598059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 
00:24:19.129 [2024-07-15 10:41:07.598373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.598447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.598667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.598714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.129 qpair failed and we were unable to recover it. 00:24:19.129 [2024-07-15 10:41:07.598904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.129 [2024-07-15 10:41:07.598982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.599214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.599282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.599432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.599482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.599669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.599717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.599865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.599915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.600125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.600199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.600347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.600395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.600618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.600665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 
00:24:19.130 [2024-07-15 10:41:07.600831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.600882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.601086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.601154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.601370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.601437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.601584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.601632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.601823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.601872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.602130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.602197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.602449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.602517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.602660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.602710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.602981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.603050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.603305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.603372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 
00:24:19.130 [2024-07-15 10:41:07.603568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.603616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.603868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.603919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.604119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.604187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.604380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.604428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.604647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.604694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.604922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.604992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.605169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.605243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.605422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.605469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.605636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.605685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.605917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.605987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 
00:24:19.130 [2024-07-15 10:41:07.606222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.606272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.606476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.606524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.606689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.606737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.606986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.607056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.607261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.607329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.607545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.607594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.607822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.607871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.608022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.608073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.608304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.608371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.608584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.608632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 
00:24:19.130 [2024-07-15 10:41:07.608823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.608872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.609059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.609106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.609311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.609360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.609527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.609577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.609769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.609830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.610020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.610068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.610222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.610272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.610496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.610564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.610763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.610826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.611055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.611104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 
00:24:19.130 [2024-07-15 10:41:07.611278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.611327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.611581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.611629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.611846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.611896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.612156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.612205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.612393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.612440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.612628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.612677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.612908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.612978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.613216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.613284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.613480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.613547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.613768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.613833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 
00:24:19.130 [2024-07-15 10:41:07.614054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.614135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.614402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.614469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.614654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.614701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.614919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.614988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.615179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.615248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.615505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.615573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.615794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.615853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.616116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.616182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.616354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.616427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.130 [2024-07-15 10:41:07.616619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.616674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 
00:24:19.130 [2024-07-15 10:41:07.616889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.130 [2024-07-15 10:41:07.616957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.130 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.617226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.617293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.617564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.617630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.617853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.617901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.618085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.618135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.618354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.618403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.618578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.618626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.618821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.618870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.619047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.619096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.619285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.619331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 
00:24:19.131 [2024-07-15 10:41:07.619489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.619536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.619776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.619845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.620038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.620086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.620296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.620367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.620593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.620641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.620815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.620865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.621061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.621110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.621323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.621370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.621562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.621609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.621792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.621856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 
00:24:19.131 [2024-07-15 10:41:07.622086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.622134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.622346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.622394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.622578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.622624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.622819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.622868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.623108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.623176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.623437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.623502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.623711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.623759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.624013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.624082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.624345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.624412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.624604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.624651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 
00:24:19.131 [2024-07-15 10:41:07.624822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.624871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.625077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.625127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.625344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.625413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.625595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.625644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.625834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.625883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.626106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.626173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.626444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.626511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.626677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.626726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.626927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.626995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.627217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.627291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 
00:24:19.131 [2024-07-15 10:41:07.627509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.627557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.627732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.627780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.628056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.628130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.628342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.628410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.628582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.628630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.628782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.628840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.629032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.629100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.629368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.629435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.629608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.629655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.629878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.629928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 
00:24:19.131 [2024-07-15 10:41:07.630117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.630168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.630359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.630408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.630573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.630621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.630827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.630876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.631104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.631172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.631364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.631412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.631603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.631650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.631851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.631901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.632138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.131 [2024-07-15 10:41:07.632204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.131 qpair failed and we were unable to recover it. 00:24:19.131 [2024-07-15 10:41:07.632427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.632475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 
00:24:19.132 [2024-07-15 10:41:07.632667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.632715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.632935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.632983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.633172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.633220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.633419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.633466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.633611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.633659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.633881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.633954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.634185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.634232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.634387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.634435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.634655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.634702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.634865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.634914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 
00:24:19.132 [2024-07-15 10:41:07.635109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.635157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.635336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.635416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.635560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.635607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.635797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.635856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.636028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.636103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.636339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.636408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.636624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.636672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.636889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.636958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.637223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.637291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 00:24:19.132 [2024-07-15 10:41:07.637478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.132 [2024-07-15 10:41:07.637532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.132 qpair failed and we were unable to recover it. 
00:24:19.132 [2024-07-15 10:41:07.637761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.132 [2024-07-15 10:41:07.637834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420
00:24:19.132 qpair failed and we were unable to recover it.
00:24:19.132 [... identical error sequence repeats from 10:41:07.637761 through 10:41:07.695724 (elapsed 00:24:19.132-00:24:19.418): every connect() attempt fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7ff714000b90 (addr=10.0.0.2, port=4420), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:19.418 [2024-07-15 10:41:07.696000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.696069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.696296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.696364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.696554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.696602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.696819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.696870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.697100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.697168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.697430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.697497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.697654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.697704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.697959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.698010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.698202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.698270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.698451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.698500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 
00:24:19.418 [2024-07-15 10:41:07.698684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.698734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.698920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.698970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.699163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.699212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.699355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.699406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.699632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.699681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.699905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.699956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.700160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.700210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.700398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.700448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.700683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.700732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.700899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.700950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 
00:24:19.418 [2024-07-15 10:41:07.701166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.701262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.701498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.701548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.701727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.701776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.701989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.702058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.702296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.702364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.702580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.702629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.702821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.702870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.703038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.418 [2024-07-15 10:41:07.703115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.418 qpair failed and we were unable to recover it. 00:24:19.418 [2024-07-15 10:41:07.703351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.703417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.703638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.703693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 
00:24:19.419 [2024-07-15 10:41:07.703865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.703918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.704153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.704223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.704440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.704490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.704635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.704690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.704919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.704989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.705251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.705320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.705530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.705579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.705816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.705865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.706075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.706151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.706311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.706363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 
00:24:19.419 [2024-07-15 10:41:07.706555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.706612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.706837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.706908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.707089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.707163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.707358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.707429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.707626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.707675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.707889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.707965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.708247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.708323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.708508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.708556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.708696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.708746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.708938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.708973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 
00:24:19.419 [2024-07-15 10:41:07.709090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.709127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.709234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.709269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.709404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.709438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.709568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.709602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.709741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.709823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.709976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.710012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.710188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.710238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.710389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.710426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.710574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.710609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.710757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.710812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 
00:24:19.419 [2024-07-15 10:41:07.710942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.710975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.711110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.711143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.711276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.711310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.711455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.711489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.711625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.711657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.711758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.711812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.711956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.711990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.712127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.712160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.712263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.712295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.712456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.712505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 
00:24:19.419 [2024-07-15 10:41:07.712614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.712647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.712782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.712832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.712993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.713026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.713172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.713204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.713335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.713367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.713532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.713566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.713704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.713738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.713920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.713953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.714057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.714089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.714201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.714233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 
00:24:19.419 [2024-07-15 10:41:07.714378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.714410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.714568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.714601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.714734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.714766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.714925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.714971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.715115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.715146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.715277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.715308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.715438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.715469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.715598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.715627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.715761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.715808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.715908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.715938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 
00:24:19.419 [2024-07-15 10:41:07.716069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.716110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.716236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.419 [2024-07-15 10:41:07.716266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.419 qpair failed and we were unable to recover it. 00:24:19.419 [2024-07-15 10:41:07.716394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.716423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.716553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.716583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.716689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.716720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.716860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.716894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.716994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.717030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.717177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.717208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.717369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.717401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.717499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.717530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 
00:24:19.420 [2024-07-15 10:41:07.717631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.717663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.717825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.717872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.717970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.718000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.718120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.718151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.718283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.718316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.718413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.718442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.718594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.718622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.718712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.718741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.718884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.718913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.719053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.719083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 
00:24:19.420 [2024-07-15 10:41:07.719224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.719253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.719406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.719435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.719554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.719584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.719681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.719709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.719815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.719846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.719995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.720025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.720158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.720187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.720289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.720318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.720444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.720472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.720598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.720627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 
00:24:19.420 [2024-07-15 10:41:07.720765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.720822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.720947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.720978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.721139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.721178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.721307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.721344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.721440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.721470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.721616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.721645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.721769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.721812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.721965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.721993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.722084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.722117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.722272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.722299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 
00:24:19.420 [2024-07-15 10:41:07.722445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.722473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.722621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.722650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.722810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.722841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.722971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.723000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.723118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.723148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.723296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.723326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.723419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.723449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.723569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.723601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.723755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.723784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.723893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.723922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 
00:24:19.420 [2024-07-15 10:41:07.724044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.724073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.724211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.724240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.724363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.724391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.724494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.724523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.724644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.724674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.724828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.724859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.724954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.724984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.725076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.725116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.725274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.725303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.725456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.725485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 
00:24:19.420 [2024-07-15 10:41:07.725594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.725624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.725743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.725772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.725902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.725931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.420 qpair failed and we were unable to recover it. 00:24:19.420 [2024-07-15 10:41:07.726029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.420 [2024-07-15 10:41:07.726058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.726190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.726218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.726329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.726358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.726474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.726502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.726638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.726667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.726756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.726784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.726936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.726977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 
00:24:19.421 [2024-07-15 10:41:07.727113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.727142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.727258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.727286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.727438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.727465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.727579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.727611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.727730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.727756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.727902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.727932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.728020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.728048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.728166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.728195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.728306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.728333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.728459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.728486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 
00:24:19.421 [2024-07-15 10:41:07.728576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.728604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.728694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.728722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.728822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.728850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.728967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.728993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.729135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.729161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.729252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.729278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.729395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.729421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.729509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.729537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.729679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.729706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.729789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.729830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 
00:24:19.421 [2024-07-15 10:41:07.729919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.729947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.730093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.730120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.730260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.730287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.730378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.730406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.730496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.730523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.730626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.730668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.730792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.730826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.730945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.730972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.731080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.731107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.731191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.731217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 
00:24:19.421 [2024-07-15 10:41:07.731305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.731336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.731456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.731482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.731580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.731605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.731713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.731739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.731836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.731881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.732076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.732108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.732220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.732247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.732360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.732386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.732469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.732495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.732607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.732633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 
00:24:19.421 [2024-07-15 10:41:07.732747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.732774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.732874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.732902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.733018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.733044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.733164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.733191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.733308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.733334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.733416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.733443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.733555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.733582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.733772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.733815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.733929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.733956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.734066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.734104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 
00:24:19.421 [2024-07-15 10:41:07.734195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.734221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.734333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.734360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.734451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.421 [2024-07-15 10:41:07.734478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.421 qpair failed and we were unable to recover it. 00:24:19.421 [2024-07-15 10:41:07.734589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.734615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.734703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.734729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.734917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.734945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.735053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.735079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.735169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.735195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.735308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.735336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.735442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.735468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 
00:24:19.422 [2024-07-15 10:41:07.735563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.735589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.735680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.735706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.735846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.735873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.735951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.735977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.736085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.736111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.736300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.736326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.736461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.736488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.736604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.736630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.736788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.736834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.736926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.736955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 
00:24:19.422 [2024-07-15 10:41:07.737048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.737081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.737192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.737218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.737297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.737324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.737452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.737491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.737637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.737663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.737777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.737823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.737910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.737936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.738055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.738081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.738194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.738221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.738301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.738328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 
00:24:19.422 [2024-07-15 10:41:07.738418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.738445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.738556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.738583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.738689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.738717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.738861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.738889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.739014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.739041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.739152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.739181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.739295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.739321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.739428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.739469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.739579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.739606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.739707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.739746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 
00:24:19.422 [2024-07-15 10:41:07.739841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.739869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.739962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.739988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.740108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.740133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.740247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.740272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.740387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.740412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.740498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.740524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.740613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.740641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.740726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.740757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.740880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.740909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.741045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.741071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 
00:24:19.422 [2024-07-15 10:41:07.741184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.741211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.741319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.741345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.741437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.741465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.741572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.741597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.741685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.741710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.741791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.741826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.741963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.741989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.742114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.742139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.742237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.742262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.742376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.742401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 
00:24:19.422 [2024-07-15 10:41:07.742485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.742511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.742628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.742656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.742781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.742831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.742952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.742981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.743102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.743129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.422 qpair failed and we were unable to recover it. 00:24:19.422 [2024-07-15 10:41:07.743218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.422 [2024-07-15 10:41:07.743245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.743338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.743365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.743521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.743548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.743666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.743694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.743772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.743812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 
00:24:19.423 [2024-07-15 10:41:07.743906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.743932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.744011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.744037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.744160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.744186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.744300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.744327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.744438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.744465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.744617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.744656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.744779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.744822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.744942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.744968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.745079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.745112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.745222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.745249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 
00:24:19.423 [2024-07-15 10:41:07.745386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.745413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.745529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.745555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.745655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.745695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.745795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.745829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.745941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.745967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.746083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.746110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.746232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.746259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.746372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.746402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.746517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.746545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.746626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.746653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 
00:24:19.423 [2024-07-15 10:41:07.746770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.746796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.746912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.746939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.747017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.747044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.747181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.747229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.747383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.747433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.747630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.747679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.747822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.747882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.748019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.748045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.748287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.748355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.748544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.748595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 
00:24:19.423 [2024-07-15 10:41:07.748789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.748858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.748956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.748985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.749072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.749143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.749414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.749480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.749707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.749769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.749976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.750004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.750142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.750206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.750460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.750524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.750826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.750876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.750988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.751015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 
00:24:19.423 [2024-07-15 10:41:07.751191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.751257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.751543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.751569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.751816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.751861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.751975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.752002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.752113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.752151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.752410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.752482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.752736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.752819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.752949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.752975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.753067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.753092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 00:24:19.423 [2024-07-15 10:41:07.753207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.753233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.423 qpair failed and we were unable to recover it. 
00:24:19.423 [2024-07-15 10:41:07.753444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.423 [2024-07-15 10:41:07.753507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.753821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.753885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.753973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.753999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.754159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.754223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.754529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.754600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.754836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.754890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.755008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.755034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.755119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.755144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.755368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.755428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.755611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.755672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 
00:24:19.424 [2024-07-15 10:41:07.755910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.755936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.756024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.756050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.756190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.756272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.756510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.756573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.756843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.756894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.756970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.756995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.757134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.757194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.757467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.757529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.757772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.757869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.757987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.758017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 
00:24:19.424 [2024-07-15 10:41:07.758155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.758232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.758530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.758615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.758824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.758877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.758985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.759011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.759199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.759267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.759600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.759663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.759913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.759940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.760052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.760079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.760290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.760353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.760538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.760609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 
00:24:19.424 [2024-07-15 10:41:07.760791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.760824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.760941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.760966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.761055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.761081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.761222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.761247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.761357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.761383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.761497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.761522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.761644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.761670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.761851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.761877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.761973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.762012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.762105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.762132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 
00:24:19.424 [2024-07-15 10:41:07.762247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.762273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.762474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.762538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.762771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.762828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.762992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.763018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.763136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.763163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.763256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.763281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.763370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.763396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.763507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.763532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.763638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.763668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.763783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.763815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 
00:24:19.424 [2024-07-15 10:41:07.763902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.763928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.764038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.764063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.764262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.764324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.764509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.764580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.764830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.764891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.764974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.764999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.765103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.765166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.765413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.765476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.765666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.765729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.765955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.765980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 
00:24:19.424 [2024-07-15 10:41:07.766127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.766203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.766508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.766571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.766786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.766878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.766993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.767019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.424 [2024-07-15 10:41:07.767135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.424 [2024-07-15 10:41:07.767161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.424 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.767318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.767380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.767631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.767694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.767897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.767925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.768059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.768101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.768386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.768449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 
00:24:19.425 [2024-07-15 10:41:07.768734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.768796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.768964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.768990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.769180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.769219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.769337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.769365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.769481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.769546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.769788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.769883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.770002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.770029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.770193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.770258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.770547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.770614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.770890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.770918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 
00:24:19.425 [2024-07-15 10:41:07.771003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.771030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.771150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.771177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.771331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.771395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.771647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.771713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.771904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.771930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.772047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.772073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.772364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.772439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.772669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.772732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.772923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.772951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.773051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.773114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 
00:24:19.425 [2024-07-15 10:41:07.773368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.773432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.773707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.773773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.773910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.773936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.774026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.774052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.774234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.774299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.774547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.774611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.774894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.774922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.775005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.775033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.775224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.775291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.775538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.775603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 
00:24:19.425 [2024-07-15 10:41:07.775856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.775884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.776003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.776030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.776264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.776361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.776647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.776717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.776921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.776948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.777047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.777073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.777164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.777216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.777468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.777530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.777736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.777818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.777935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.777963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 
00:24:19.425 [2024-07-15 10:41:07.778078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.778119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.778329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.778357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.778520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.778547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.778761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.778854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.778940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.778967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.779074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.779104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.779239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.779266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.779383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.779410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.779531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.779557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.779666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.779692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 
00:24:19.425 [2024-07-15 10:41:07.779814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.779841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.779958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.779984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.780106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.780132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.780249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.780278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.780475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.780539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.780741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.780817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.780931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.780957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 [2024-07-15 10:41:07.781055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.425 [2024-07-15 10:41:07.781107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.425 qpair failed and we were unable to recover it. 00:24:19.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1304301 Killed "${NVMF_APP[@]}" "$@" 00:24:19.426 [2024-07-15 10:41:07.781253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.781296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 
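Note on the flood of `connect() failed, errno = 111` records around the `Killed "${NVMF_APP[@]}"` message above: on Linux, errno 111 is ECONNREFUSED, the error returned when nothing is listening on the target address/port. In this test the nvmf target process has just been killed (that is what target_disconnect.sh is exercising), so the host-side qpair reconnect attempts to 10.0.0.2:4420 are refused until a target is listening again. The following is a minimal standalone sketch, not SPDK code, of the same failure mode; the address and port simply mirror the log, and against an unreachable host the call may fail with a different errno instead.

```c
/* Minimal sketch (not SPDK code): what an "errno = 111" connect() failure is.
 * On Linux, errno 111 is ECONNREFUSED -- returned when no process is
 * listening on the destination address/port. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on that address/port this typically prints:
         * "connect() failed, errno = 111 (Connection refused)" */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```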
00:24:19.426 [2024-07-15 10:41:07.781453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.781489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 10:41:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:24:19.426 [2024-07-15 10:41:07.781587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.781622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 10:41:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:19.426 [2024-07-15 10:41:07.781761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.781796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 10:41:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:19.426 [2024-07-15 10:41:07.781939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.781965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 10:41:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:19.426 [2024-07-15 10:41:07.782104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 10:41:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.426 [2024-07-15 10:41:07.782149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.782363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.782425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.782692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.782755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.782942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.782969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 
00:24:19.426 [2024-07-15 10:41:07.783078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.783131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.783324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.783387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.783665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.783731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.783875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.783920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.784014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.784040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.784283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.784317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.784459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.784492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.784630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.784663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.784904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.784931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.785021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.785046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 
00:24:19.426 [2024-07-15 10:41:07.785251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.785290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.785388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.785415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.785586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.785651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.785874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.785905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.785985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.786012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.786107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.786133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.786240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.786271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 10:41:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1304862 00:24:19.426 [2024-07-15 10:41:07.786364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 10:41:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:19.426 10:41:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1304862 00:24:19.426 [2024-07-15 10:41:07.786433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 
00:24:19.426 10:41:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1304862 ']'
00:24:19.426 [2024-07-15 10:41:07.786646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.426 10:41:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:19.426 [2024-07-15 10:41:07.786709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420
00:24:19.426 qpair failed and we were unable to recover it.
00:24:19.426 10:41:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:19.426 [2024-07-15 10:41:07.786909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.426 [2024-07-15 10:41:07.786936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420
00:24:19.426 qpair failed and we were unable to recover it.
00:24:19.426 10:41:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:19.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:19.426 [2024-07-15 10:41:07.787027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.426 [2024-07-15 10:41:07.787054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420
00:24:19.426 qpair failed and we were unable to recover it.
00:24:19.426 10:41:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:19.426 10:41:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:19.426 [2024-07-15 10:41:07.787205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.426 [2024-07-15 10:41:07.787242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420
00:24:19.426 qpair failed and we were unable to recover it.
00:24:19.426 [2024-07-15 10:41:07.787416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.426 [2024-07-15 10:41:07.787481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420
00:24:19.426 qpair failed and we were unable to recover it.
00:24:19.426 [2024-07-15 10:41:07.787685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.426 [2024-07-15 10:41:07.787749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420
00:24:19.426 qpair failed and we were unable to recover it.
00:24:19.426 [2024-07-15 10:41:07.787917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.426 [2024-07-15 10:41:07.787943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420
00:24:19.426 qpair failed and we were unable to recover it.
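Note on the repeated failures above: errno = 111 is ECONNREFUSED on Linux, which is consistent with the host-side NVMe/TCP initiator retrying connect() to 10.0.0.2:4420 while the nvmf_tgt instance being launched in the cvl_0_0_ns_spdk namespace (see the nvmf/common.sh trace above) has not yet bound the port. The following is a minimal standalone sketch, not part of the SPDK test suite; the address and port are taken from the log, and it simply demonstrates why connect() reports errno 111 when no listener is present.

/* Hedged illustration only: shows the errno-111 (ECONNREFUSED) path seen in
 * posix_sock_create when nothing is listening on the target address/port. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),          /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints
         * "connect() failed, errno = 111 (Connection refused)". */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}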
00:24:19.426 [2024-07-15 10:41:07.788036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.788066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.788204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.788234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.788415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.788467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.788682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.788744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.788916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.788942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.789047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.789074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.789225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.789255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.789402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.789458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.789582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.789617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.789768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.789821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 
00:24:19.426 [2024-07-15 10:41:07.789948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.789974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.790058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.790083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.790203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.790235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.790377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.790411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.790608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.790661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.790817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.790870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.790967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.790994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.791109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.426 [2024-07-15 10:41:07.791141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.426 qpair failed and we were unable to recover it. 00:24:19.426 [2024-07-15 10:41:07.791277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.791308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.791497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.791523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 
00:24:19.427 [2024-07-15 10:41:07.791661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.791694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.791823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.791864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.791966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.792006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.792172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.792222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.792408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.792458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.792613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.792646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.792742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.792779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.792891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.792923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.793025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.793054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.793137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.793163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 
00:24:19.427 [2024-07-15 10:41:07.793292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.793326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.793431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.793467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.793582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.793615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.793766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.793792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.793918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.793944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.794027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.794052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.794191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.794222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.794383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.794426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.794553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.794584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.794709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.794758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 
00:24:19.427 [2024-07-15 10:41:07.794917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.794946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.795047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.795074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.795204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.795239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.795387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.795421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.795559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.795625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.795746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.795814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.795933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.795959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.796041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.796066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.796175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.796217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.796321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.796353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 
00:24:19.427 [2024-07-15 10:41:07.796492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.796526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.796652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.796689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.796810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.796843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.796991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.797040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.797193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.797247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.797353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.797398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.797538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.797569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.797676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.797709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.797821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.797853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.798010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.798042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 
00:24:19.427 [2024-07-15 10:41:07.798174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.798205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.798349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.798380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.798508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.798540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.798640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.798671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.798813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.798840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.798932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.798959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.799049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.799094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.799239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.799271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.799413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.799445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.799575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.799606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 
00:24:19.427 [2024-07-15 10:41:07.799771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.799826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.799921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.799949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.800048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.800077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.800213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.800245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.800381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.800414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.800519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.800560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.800683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.800716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.800846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.800891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.801007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.801038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.801179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.801226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 
00:24:19.427 [2024-07-15 10:41:07.801366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.801391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.801567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.801596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.801681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.801708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.801796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.801828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.427 [2024-07-15 10:41:07.801911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.427 [2024-07-15 10:41:07.801938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.427 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.802031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.802058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.802142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.802169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.802283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.802311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.802440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.802476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.802611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.802642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 
00:24:19.428 [2024-07-15 10:41:07.802740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.802771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.802889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.802921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.803058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.803092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.803174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.803199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.803283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.803313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.803459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.803502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.803629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.803660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.803792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.803831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.803981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.804029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.804169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.804203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 
00:24:19.428 [2024-07-15 10:41:07.804333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.804366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.804466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.804497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.804622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.804654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.804778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.804849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.804944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.804975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.805098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.805127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.805228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.805257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.805355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.805395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.805526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.805554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.805660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.805694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 
00:24:19.428 [2024-07-15 10:41:07.805792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.805836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.805966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.805997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.806124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.806155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.806264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.806295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.806419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.806447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.806555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.806586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.806719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.806747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.806871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.806900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.806989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.807017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.807134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.807172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 
00:24:19.428 [2024-07-15 10:41:07.807292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.807321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.807423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.807471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.807567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.807596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.807750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.807781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.807927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.807957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.808052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.808080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.808196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.808225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.808389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.808419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.808521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.808550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.808772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.808817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 
00:24:19.428 [2024-07-15 10:41:07.808961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.808992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.809086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.809129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.809237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.809273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.809381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.809407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.809497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.809523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.809725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.809752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.809899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.809930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.810022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.810055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.810154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.810184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.810283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.810314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 
00:24:19.428 [2024-07-15 10:41:07.810447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.810477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.810568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.810611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.810751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.810781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.810886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.810920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.811011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.811056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.811175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.811206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.811325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.811352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.811490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.811532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.811655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.811690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.811832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.811864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 
00:24:19.428 [2024-07-15 10:41:07.811957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.428 [2024-07-15 10:41:07.811989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.428 qpair failed and we were unable to recover it. 00:24:19.428 [2024-07-15 10:41:07.812094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.812122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.812240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.812268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.812367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.812394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.812507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.812535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.812631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.812664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.812772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.812814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.812900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.812928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.813020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.813046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.813132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.813159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 
00:24:19.429 [2024-07-15 10:41:07.813294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.813323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.813411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.813455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.813591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.813620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.813787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.813829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.813977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.814004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.814124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.814151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.814237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.814264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.814347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.814379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.814499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.814541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.814647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.814674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 
00:24:19.429 [2024-07-15 10:41:07.814754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.814780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.814899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.814926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.815028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.815055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.815180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.815225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.815327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.815355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.815466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.815497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.815617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.815645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.815761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.815809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.815937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.815966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.816106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.816136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 
00:24:19.429 [2024-07-15 10:41:07.816227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.816252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.816360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.816402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.816487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.816514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.816658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.816686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.816897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.816928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.817024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.817052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.817198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.817227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.817322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.817350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.817467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.817501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.817708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.817737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 
00:24:19.429 [2024-07-15 10:41:07.817888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.817927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.818026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.818055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.818214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.818242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.818331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.818360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.818478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.818506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.818613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.818640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.818727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.818755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.818856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.818902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.818995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.819022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.819144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.819170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 
00:24:19.429 [2024-07-15 10:41:07.819252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.819284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.819367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.819393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.819520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.819547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.819638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.819664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.819788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.819822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.820039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.820066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.820185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.820222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.820314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.820341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.429 qpair failed and we were unable to recover it. 00:24:19.429 [2024-07-15 10:41:07.820430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.429 [2024-07-15 10:41:07.820457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.820573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.820603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 
00:24:19.430 [2024-07-15 10:41:07.820686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.820730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.820864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.820891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.820974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.821000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.821082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.821108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.821241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.821267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.821362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.821391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.821537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.821577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.821678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.821706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.821821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.821850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.821952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.821980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 
00:24:19.430 [2024-07-15 10:41:07.822088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.822118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.822249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.822278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.822399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.822428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.822554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.822582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.822671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.822700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.822819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.822862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.822954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.822980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.823099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.823125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.823266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.823292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.823425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.823451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 
00:24:19.430 [2024-07-15 10:41:07.823566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.823593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.823742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.823769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.823869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.823906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.823988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.824015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.824124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.824150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.824232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.824258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.824351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.824377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.824495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.824525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.824626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.824665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.824757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.824785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 
00:24:19.430 [2024-07-15 10:41:07.824910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.824936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.825024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.825055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.825143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.825169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.825256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.825282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.825403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.825432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.825561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.825600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.825719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.825747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.825843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.825870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.825955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.825981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.826089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.826115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 
00:24:19.430 [2024-07-15 10:41:07.826196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.826228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.826307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.826332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.826453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.826480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.826686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.826714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.826865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.826891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.827003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.827034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.827150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.827177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.827264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.827291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.827380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.827406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.827516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.827541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 
00:24:19.430 [2024-07-15 10:41:07.827627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.827653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.827739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.827765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.827909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.827947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.828089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.828115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.828201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.828227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.828324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.828351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.828492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.828521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.828651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.828691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.828837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.828865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.828963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.828992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 
00:24:19.430 [2024-07-15 10:41:07.829107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.829135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.829231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.829258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.829349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.829375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.829468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.829495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.829612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.829639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.829737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.829765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.829856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.829883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.829976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.830003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.830118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.830144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 00:24:19.430 [2024-07-15 10:41:07.830234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.430 [2024-07-15 10:41:07.830259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.430 qpair failed and we were unable to recover it. 
00:24:19.431 [2024-07-15 10:41:07.830372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.830397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.830544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.830572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.830691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.830733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.830937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.830965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.831084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.831110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.831221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.831247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.831375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.831402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.831482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.831508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.831606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.831634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.831767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.831810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 
00:24:19.431 [2024-07-15 10:41:07.831900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.831927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.832054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.832080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.832172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.832198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.832319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.832345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.832437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.832474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.832566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.832596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.832680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.832706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.832821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.832849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.832966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.832993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.833074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.833101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 
00:24:19.431 [2024-07-15 10:41:07.833241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.833268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.833387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.833415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.833533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.833560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 [2024-07-15 10:41:07.833540] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.833627] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.431 [2024-07-15 10:41:07.833657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.833684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.833764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.833789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.833888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.833912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.834003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.834027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.834124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.834150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.834297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.834322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 
00:24:19.431 [2024-07-15 10:41:07.834409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.834437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.834562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.834590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.834702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.834730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.834880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.834907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.835001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.835028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.835119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.835157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.835247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.835274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.835396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.835423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.431 qpair failed and we were unable to recover it. 00:24:19.431 [2024-07-15 10:41:07.835535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.431 [2024-07-15 10:41:07.835565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.835723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.835751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 
00:24:19.432 [2024-07-15 10:41:07.835847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.835877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.836002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.836030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.836147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.836178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.836270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.836297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.836408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.836448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.836660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.836687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.836795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.836837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.836948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.836975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.837168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.837195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.837310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.837336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 
00:24:19.432 [2024-07-15 10:41:07.837453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.837480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.837596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.837622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.837744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.837770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.837890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.837917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.838037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.838064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.838177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.838209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.838330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.838359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.838450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.838478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.838592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.838632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.838754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.838794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 
00:24:19.432 [2024-07-15 10:41:07.838898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.838925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.839039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.839065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.839189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.839217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.839360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.839393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.839478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.839505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.839591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.839618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.839729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.839756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.839858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.839885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.839973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.840000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.840091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.840117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 
00:24:19.432 [2024-07-15 10:41:07.840198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.840224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.840311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.840338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.840431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.840457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.432 [2024-07-15 10:41:07.840577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.432 [2024-07-15 10:41:07.840604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.432 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.840717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.840745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.840827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.840858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.840946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.840972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.841084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.841117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.841209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.841236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.841325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.841354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 
00:24:19.433 [2024-07-15 10:41:07.841475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.841502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.841610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.841636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.841769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.841797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.841918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.841945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.842079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.842119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.842207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.842244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.842380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.842406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.842517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.842543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.842657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.842682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.842766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.842808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 
00:24:19.433 [2024-07-15 10:41:07.842928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.842954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.843034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.843060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.843178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.843203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.843291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.843316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.843444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.843472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.843587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.843613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.843748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.843775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.843974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.844001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.844114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.844141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.844230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.844258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 
00:24:19.433 [2024-07-15 10:41:07.844392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.844419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.844532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.844558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.844640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.844666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.844778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.844809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.844898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.844925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.845004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.845031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.845181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.845207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.845321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.845347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.845432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.845458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.845582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.845611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 
00:24:19.433 [2024-07-15 10:41:07.845689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.845715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.845821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.845848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.845938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.845964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.846048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.846074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.846169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.846196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.846302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.846328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.846434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.846460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.846537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.846562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.846646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.846674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.846814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.846853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 
00:24:19.433 [2024-07-15 10:41:07.846954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.433 [2024-07-15 10:41:07.846981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.433 qpair failed and we were unable to recover it. 00:24:19.433 [2024-07-15 10:41:07.847094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.847120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.847201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.847231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.847345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.847371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.847464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.847490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.847584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.847611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.847727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.847756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.847858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.847886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.847976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.848002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.848081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.848110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 
00:24:19.434 [2024-07-15 10:41:07.848200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.848226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.848342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.848368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.848456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.848484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.848612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.848638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.848754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.848783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.848904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.848931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.849029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.849055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.849181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.849208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.849322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.849348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.849439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.849466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 
00:24:19.434 [2024-07-15 10:41:07.849553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.849580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.849665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.849696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.849782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.849815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.849904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.849930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.850044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.850070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.850187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.850214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.850294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.850319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.850427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.850458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.850568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.850594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.850678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.850708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 
00:24:19.434 [2024-07-15 10:41:07.850784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.850819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.850937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.850963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.851048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.851074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.851201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.851227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.851336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.851371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.851462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.851488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.851682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.851707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.851798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.851832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.851968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.851995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.852080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.852105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 
00:24:19.434 [2024-07-15 10:41:07.852192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.852227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.852333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.852358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.852467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.852506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.852655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.852695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.852850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.852879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.852969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.852997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.853110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.853137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.853214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.434 [2024-07-15 10:41:07.853241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.434 qpair failed and we were unable to recover it. 00:24:19.434 [2024-07-15 10:41:07.853330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.853357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.853493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.853518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 
00:24:19.435 [2024-07-15 10:41:07.853606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.853642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.853783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.853814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.853923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.853948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.854037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.854062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.854172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.854208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.854298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.854323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.854419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.854473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.854592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.854620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.854720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.854751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.854863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.854890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 
00:24:19.435 [2024-07-15 10:41:07.854974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.855010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.855129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.855155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.855270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.855296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.855387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.855416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.855514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.855543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.855684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.855723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.855872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.855900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.856008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.856034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.856133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.856159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.856251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.856279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 
00:24:19.435 [2024-07-15 10:41:07.856370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.856396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.856497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.856525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.856666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.856693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.856811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.856839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.856956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.856983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.857070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.857110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.857222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.857248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.857332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.857360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.857478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.857503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.857611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.857647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 
00:24:19.435 [2024-07-15 10:41:07.857737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.857762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.857858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.857890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.857985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.858012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.858122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.858148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.858235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.858261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.858357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.858396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.858505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.858533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.858625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.858653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.858773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.858799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.858909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.858934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 
00:24:19.435 [2024-07-15 10:41:07.859027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.859052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.859160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.859186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.859263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.859288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.859412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.859438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.435 [2024-07-15 10:41:07.859527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.435 [2024-07-15 10:41:07.859564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.435 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.859676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.859702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.859818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.859848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.859956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.859995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.860094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.860133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.860256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.860283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 
00:24:19.436 [2024-07-15 10:41:07.860376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.860402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.860520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.860547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.860632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.860659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.860775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.860808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.860923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.860949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.861056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.861083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.861167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.861193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.861288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.861315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.861430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.861458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.861554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.861605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 
00:24:19.436 [2024-07-15 10:41:07.861758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.861788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.861941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.861969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.862049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.862080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.862173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.862205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.862323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.862350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.862464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.862491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.862588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.862616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.862820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.862850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.862977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.863004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.863084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.863111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 
00:24:19.436 [2024-07-15 10:41:07.863225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.863251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.863441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.863468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.863608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.863635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.863722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.863767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.863885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.863913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.864035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.864063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.864155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.864182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.864270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.864296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.864404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.864433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.864523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.864550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 
00:24:19.436 [2024-07-15 10:41:07.864694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.436 [2024-07-15 10:41:07.864722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.436 qpair failed and we were unable to recover it. 00:24:19.436 [2024-07-15 10:41:07.864815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.864841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.864959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.864984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.865076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.865100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.865183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.865211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.865333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.865361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.865475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.865501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.865628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.865654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.865764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.865790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.865911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.865939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 
00:24:19.437 [2024-07-15 10:41:07.866057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.866084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.866234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.866261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.866351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.866377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.866500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.866526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.866639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.866667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.866785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.866816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.866935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.866962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.867057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.867084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.867168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.867195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.867273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.867300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 
00:24:19.437 [2024-07-15 10:41:07.867391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.867421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.867536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.867566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.867672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.867711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.867820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.867848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.867962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.867987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.868075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.868102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.868199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.868225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.868338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.868363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.868475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.868500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.868583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.868609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 
00:24:19.437 [2024-07-15 10:41:07.868689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.868715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.868813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.868838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.868955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.868981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.869102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.869127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.869216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.869242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.869357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.869382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.869493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.869519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.869645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.869674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.869790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.869823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.869958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.869985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 
00:24:19.437 [2024-07-15 10:41:07.870072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.870100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.870193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.870231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.870353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.870380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.870483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.870511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.870601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.870640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.870794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.870828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.870930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.870956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.871045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.871071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.437 qpair failed and we were unable to recover it. 00:24:19.437 [2024-07-15 10:41:07.871167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.437 [2024-07-15 10:41:07.871194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.871283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.871310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 
00:24:19.438 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.438 [2024-07-15 10:41:07.871420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.871447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.871532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.871558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.871671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.871697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.871820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.871848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.871958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.871985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.872066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.872103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.872219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.872247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.872339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.872366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.872471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.872498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.872608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.872634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 
00:24:19.438 [2024-07-15 10:41:07.872759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.872820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.872973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.873012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.873112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.873141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.873253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.873280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.873400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.873425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.873508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.873534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.873644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.873670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.873781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.873835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.873926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.873955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.874097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.874123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 
00:24:19.438 [2024-07-15 10:41:07.874242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.874266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.874378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.874403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.874496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.874522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.874607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.874632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.874717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.874742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.874853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.874879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.874955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.874981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.875060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.875085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.875198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.875223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.875338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.875362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 
00:24:19.438 [2024-07-15 10:41:07.875445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.875470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.875558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.875587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.875705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.875734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.875851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.875891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.875993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.876020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.876134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.876171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.876369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.876396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.876520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.876547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.876683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.876708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.876795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.876827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 
00:24:19.438 [2024-07-15 10:41:07.876911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.876936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.877026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.877052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.877166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.877191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.877297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.877322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.877410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.877435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.877524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.438 [2024-07-15 10:41:07.877549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.438 qpair failed and we were unable to recover it. 00:24:19.438 [2024-07-15 10:41:07.877636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.877664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.877758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.877786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.877926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.877964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.878087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.878115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 
00:24:19.439 [2024-07-15 10:41:07.878200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.878226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.878348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.878374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.878492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.878519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.878656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.878681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.878765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.878805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.878946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.878971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.879059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.879084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.879202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.879228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.879370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.879395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.879487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.879516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 
00:24:19.439 [2024-07-15 10:41:07.879632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.879658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.879777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.879809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.879927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.879953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.880040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.880066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.880176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.880202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.880312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.880338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.880456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.880482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.880566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.880593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.880674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.880700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.880821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.880848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 
00:24:19.439 [2024-07-15 10:41:07.880934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.880960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.881069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.881100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.881184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.881210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.881297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.881324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.881418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.881446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.881574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.881614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.881715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.881755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.881863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.881896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.882013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.882041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.882138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.882166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 
00:24:19.439 [2024-07-15 10:41:07.882285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.882311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.882392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.882419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.882527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.882553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.882636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.882662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.882747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.882773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.882936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.882966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.883083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.883110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.883204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.883230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.883314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.883341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.883449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.883476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 
00:24:19.439 [2024-07-15 10:41:07.883574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.883603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.439 qpair failed and we were unable to recover it. 00:24:19.439 [2024-07-15 10:41:07.883724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.439 [2024-07-15 10:41:07.883751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.883851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.883880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.883998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.884024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.884144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.884170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.884282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.884307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.884411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.884436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.884519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.884544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.884743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.884771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.884905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.884934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 
00:24:19.440 [2024-07-15 10:41:07.885030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.885056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.885198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.885225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.885313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.885339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.885431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.885458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.885572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.885600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.885694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.885722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.885836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.885876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.886024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.886051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.886166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.886191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.886300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.886325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 
00:24:19.440 [2024-07-15 10:41:07.886445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.886473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.886576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.886615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.886743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.886769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.886888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.886915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.887004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.887030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.887141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.887167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.887254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.887280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.887399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.887427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.887528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.887567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.887689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.887717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 
00:24:19.440 [2024-07-15 10:41:07.887828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.887854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.887968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.887993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.888111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.888135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.888222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.888250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.888344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.888373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.888469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.888499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.888613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.888639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.888745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.888772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.888876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.888903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.889046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.889072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 
00:24:19.440 [2024-07-15 10:41:07.889189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.889215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.889338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.889365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.889462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.889489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.889624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.440 [2024-07-15 10:41:07.889651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.440 qpair failed and we were unable to recover it. 00:24:19.440 [2024-07-15 10:41:07.889745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.889772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.889907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.889935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.890040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.890078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.890281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.890309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.890401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.890427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.890536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.890562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 
00:24:19.441 [2024-07-15 10:41:07.890653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.890678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.890760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.890797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.890945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.890972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.891084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.891111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.891198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.891228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.891346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.891374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.891456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.891481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.891603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.891629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.891741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.891766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.891856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.891882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 
00:24:19.441 [2024-07-15 10:41:07.891988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.892013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.892121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.892146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.892229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.892254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.892371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.892398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.892476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.892503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.892619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.892645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.892726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.892752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.892867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.892894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.893035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.893075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.893178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.893205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 
00:24:19.441 [2024-07-15 10:41:07.893318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.893342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.893453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.893479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.893570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.893595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.893687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.893716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.893817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.893844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.893933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.893960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.894046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.894073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.894172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.894198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.894278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.894304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.894392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.894419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 
00:24:19.441 [2024-07-15 10:41:07.894506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.894533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.894662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.894702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.894791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.894825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.894963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.894989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.895073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.895105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.895197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.895222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.895339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.895364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.895478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.895504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.895581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.895607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.895695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.895724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 
00:24:19.441 [2024-07-15 10:41:07.895845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.895872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.441 [2024-07-15 10:41:07.896010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.441 [2024-07-15 10:41:07.896036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.441 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.896130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.896157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.896268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.896294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.896409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.896436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.896553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.896580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.896681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.896721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.896825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.896855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.896983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.897010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.897120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.897146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 
00:24:19.442 [2024-07-15 10:41:07.897258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.897284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.897365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.897391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.897480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.897506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.897586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.897610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.897727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.897753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.897891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.897917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.898005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.898030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.898140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.898165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.898275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.898303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.898394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.898420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 
00:24:19.442 [2024-07-15 10:41:07.898530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.898557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.898642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.898668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.898772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.898798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.898894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.898921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.899001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.899028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.899138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.899164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.899274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.899300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.899389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.899417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.899536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.899576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.899696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.899725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 
00:24:19.442 [2024-07-15 10:41:07.899846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.899874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.899959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.899990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.900069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.900096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.900230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.900257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.900372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.900400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.900486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.900514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.900630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.900657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.900733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.900761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.900854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.900879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.900959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.900985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 
00:24:19.442 [2024-07-15 10:41:07.901093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.901118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.901233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.901259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.901346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.901371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.901456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.901484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.901626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.901652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.901759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.901786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.901888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.901915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.902027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.902053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.902158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.902184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.902273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.902298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 
00:24:19.442 [2024-07-15 10:41:07.902401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.902427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.442 qpair failed and we were unable to recover it. 00:24:19.442 [2024-07-15 10:41:07.902526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.442 [2024-07-15 10:41:07.902566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.902660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.902687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.902823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.902863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.902951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.902978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.903067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.903093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.903191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.903216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.903328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.903355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.903470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.903501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.903619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.903646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 
00:24:19.443 [2024-07-15 10:41:07.903738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.903766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.903866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.903892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.904010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.904036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.904113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.904138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.904255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.904281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.904364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.904388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.904504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.904532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.904652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.904692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.904812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.904839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.904925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.904951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 
00:24:19.443 [2024-07-15 10:41:07.905068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.905093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.905204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.905230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.905339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.905365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.905509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.905537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.905680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.905706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.905717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:19.443 [2024-07-15 10:41:07.905795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.905833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.905951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.905976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.906061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.906086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.906166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.906190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.906287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.906314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 
00:24:19.443 [2024-07-15 10:41:07.906406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.906432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.906547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.906574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.906667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.906695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.906818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.906849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.906971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.906998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.907089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.907116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.907231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.907258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.907344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.907372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.907488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.907515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.907633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.907661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 
00:24:19.443 [2024-07-15 10:41:07.907771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.907798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.907901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.907928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.908039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.908066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.908177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.908203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.443 [2024-07-15 10:41:07.908297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.443 [2024-07-15 10:41:07.908324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.443 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.908420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.908446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.908560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.908589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.908677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.908705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.908806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.908840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.908955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.908980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 
00:24:19.444 [2024-07-15 10:41:07.909075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.909100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.909208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.909233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.909341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.909366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.909445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.909470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.909587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.909612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.909708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.909735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.909830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.909858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.909975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.910004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.910090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.910117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.910203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.910229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 
00:24:19.444 [2024-07-15 10:41:07.910323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.910349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.910473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.910500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.910618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.910646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.910742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.910768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.910859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.910885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.910996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.911021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.911110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.911136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.911248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.911274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.911364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.911390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.911509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.911534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 
00:24:19.444 [2024-07-15 10:41:07.911619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.911646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.911730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.911757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.911885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.911914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.912012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.912039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.912160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.912187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.912275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.912302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.912391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.912419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.912548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.912587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.912676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.912704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.912793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.912825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 
00:24:19.444 [2024-07-15 10:41:07.912943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.912970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.913061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.913087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.913200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.913226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.913315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.913342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.913449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.913474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.913572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.913599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.913714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.913739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.913822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.913849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.913964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.913996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.914112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.914140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 
00:24:19.444 [2024-07-15 10:41:07.914264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.914290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.914365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.914391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.914500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.914527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.914624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.444 [2024-07-15 10:41:07.914663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.444 qpair failed and we were unable to recover it. 00:24:19.444 [2024-07-15 10:41:07.914797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.914866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.914994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.915023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.915141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.915168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.915283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.915309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.915402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.915429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.915552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.915579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 
00:24:19.445 [2024-07-15 10:41:07.915722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.915748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.915843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.915870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.915969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.915996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.916104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.916131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.916250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.916279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.916369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.916395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.916534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.916560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.916650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.916677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.916767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.916795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.916920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.916946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 
00:24:19.445 [2024-07-15 10:41:07.917030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.917057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.917166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.917191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.917278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.917304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.917418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.917444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.917561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.917590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.917741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.917781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.917915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.917943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.918067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.918092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.918184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.918210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.918290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.918315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 
00:24:19.445 [2024-07-15 10:41:07.918404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.918430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.918550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.918578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.918668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.918695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.918785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.918831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.918952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.918980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.919097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.919124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.919218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.919246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.919383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.919409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.919533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.919574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.919689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.919729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 
00:24:19.445 [2024-07-15 10:41:07.919879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.919907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.920025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.920051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.920149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.920174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.920263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.920289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.920385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.920413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.920506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.920533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.920627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.920654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.920773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.920812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.920898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.920924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.921063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.921090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 
00:24:19.445 [2024-07-15 10:41:07.921179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.921207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.921307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.921333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.921465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.921504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.921598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.921625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.445 qpair failed and we were unable to recover it. 00:24:19.445 [2024-07-15 10:41:07.921728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.445 [2024-07-15 10:41:07.921768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.921908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.921936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.922058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.922085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.922175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.922202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.922320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.922348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.922444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.922474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 
00:24:19.446 [2024-07-15 10:41:07.922565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.922590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.922707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.922735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.922821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.922848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.922972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.922998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.923087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.923114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.923225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.923259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.923346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.923372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.923458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.923485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.923623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.923650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.923771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.923816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 
00:24:19.446 [2024-07-15 10:41:07.923938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.923965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.924054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.924080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.924164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.924189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.924271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.924299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.924391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.924419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.924532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.924559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.924654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.924681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.924797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.924835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.924921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.924948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.925068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.925095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 
00:24:19.446 [2024-07-15 10:41:07.925193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.925220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.925333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.925359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.925473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.925500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.925612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.925639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.925728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.925756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.925848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.925876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.925964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.925992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.926107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.926133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.926216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.926243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.926352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.926378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 
00:24:19.446 [2024-07-15 10:41:07.926491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.926517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.926630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.926658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.926766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.926794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.926916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.926943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.927031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.927058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.927169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.927196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.927284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.927311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.927429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.927456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.927568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.927596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.927696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.927721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 
00:24:19.446 [2024-07-15 10:41:07.927831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.927858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.927945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.927970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.928075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.928101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.928212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.928237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.928327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.446 [2024-07-15 10:41:07.928352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.446 qpair failed and we were unable to recover it. 00:24:19.446 [2024-07-15 10:41:07.928432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.928461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.928572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.928599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.928716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.928743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.928850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.928890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.929024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.929063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 
00:24:19.447 [2024-07-15 10:41:07.929204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.929232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.929347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.929372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.929458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.929484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.929563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.929589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.929676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.929703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.929820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.929848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.929963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.929989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.930082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.930109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.930200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.930227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.930328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.930355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 
00:24:19.447 [2024-07-15 10:41:07.930446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.930472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.930580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.930605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.930696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.930722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.930815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.930842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.930933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.930959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.931038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.931064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.931177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.931205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.931320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.931347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.931457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.931483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.931566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.931592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 
00:24:19.447 [2024-07-15 10:41:07.931705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.931731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.931833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.931873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.931965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.931998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.932084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.932112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.932193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.932220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.932309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.932336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.932444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.447 [2024-07-15 10:41:07.932469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.447 qpair failed and we were unable to recover it. 00:24:19.447 [2024-07-15 10:41:07.932543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.932569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.932655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.932681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.932793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.932830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 
00:24:19.448 [2024-07-15 10:41:07.932915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.932942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.933036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.933062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.933159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.933199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.933320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.933349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.933436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.933464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.933614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.933640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.933758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.933786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.933883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.933911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.934052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.934080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.934195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.934222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 
00:24:19.448 [2024-07-15 10:41:07.934334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.934360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.934443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.934471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.934558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.934588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.934729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.934758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.934853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.934880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.934997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.935024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.935118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.935144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.935225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.935252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.935366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.935392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.935477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.935510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 
00:24:19.448 [2024-07-15 10:41:07.935626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.935654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.935770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.935798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.935898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.935925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.936019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.936047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.936164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.936190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.936306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.936334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.936456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.936493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.936592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.936621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.936763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.936789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.936915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.936943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 
00:24:19.448 [2024-07-15 10:41:07.937055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.937081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.937162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.937189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.937280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.937306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.937404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.937431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.937519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.937546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.937633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.937660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.937742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.937769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.448 qpair failed and we were unable to recover it. 00:24:19.448 [2024-07-15 10:41:07.937885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.448 [2024-07-15 10:41:07.937912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.938032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.938058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.938168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.938194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 
00:24:19.449 [2024-07-15 10:41:07.938308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.938335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.938457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.938488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.938608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.938636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.938746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.938783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.938928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.938956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.939051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.939078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.939194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.939222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.939329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.939356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.939447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.939474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.939561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.939588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 
00:24:19.449 [2024-07-15 10:41:07.939703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.939733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.939832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.939860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.939955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.939982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.940073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.940099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.940183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.940210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.940298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.940325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.449 [2024-07-15 10:41:07.940407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.449 [2024-07-15 10:41:07.940434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.449 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.940549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.940587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.940711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.940738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.940843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.940876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 
00:24:19.731 [2024-07-15 10:41:07.940993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.941021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.941121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.941147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.941237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.941264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.941344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.941370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.941486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.941513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.941610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.941645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.941776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.941818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.941949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.941984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.942145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.942179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.942313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.942349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 
00:24:19.731 [2024-07-15 10:41:07.942465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.942503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.942621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.942649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.942767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.942793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.942898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.942925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.943042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.943068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.943151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.943177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.943259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.943286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.943395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.731 [2024-07-15 10:41:07.943421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.731 qpair failed and we were unable to recover it. 00:24:19.731 [2024-07-15 10:41:07.943514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.943541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.943627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.943653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 
00:24:19.732 [2024-07-15 10:41:07.943745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.943770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.943870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.943896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.943988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.944013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.944104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.944135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.944225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.944252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.944337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.944364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.944448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.944474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.944554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.944581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.944664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.944690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.944773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.944799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 
00:24:19.732 [2024-07-15 10:41:07.944893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.944918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.945001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.945027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.945138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.945166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.945281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.945308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.945421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.945447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.945533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.945560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.945638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.945667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.945782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.945814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.945936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.945964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.946052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.946082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 
00:24:19.732 [2024-07-15 10:41:07.946171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.946198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.946278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.946304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.946384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.946410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.946523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.946550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.946658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.946685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.946847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.946880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.946975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.947002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.947138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.947165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.947240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.947267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.947348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.947375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 
00:24:19.732 [2024-07-15 10:41:07.947493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.947520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.947609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.947637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.947729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.947755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.947859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.947888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.947979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.948006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.948119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.948145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.948233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.948260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.948354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.948382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.732 [2024-07-15 10:41:07.948494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.732 [2024-07-15 10:41:07.948522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.732 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.948638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.948666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 
00:24:19.733 [2024-07-15 10:41:07.948757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.948785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.948871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.948899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.948980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.949008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.949151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.949179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.949279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.949320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.949421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.949457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.949581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.949610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.949728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.949765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.949886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.949913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.950004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.950032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 
00:24:19.733 [2024-07-15 10:41:07.950118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.950145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.950224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.950251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.950338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.950365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.950480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.950507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.950623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.950651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.950743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.950771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.950893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.950920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.951011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.951037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.951144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.951170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.951285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.951316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 
00:24:19.733 [2024-07-15 10:41:07.951408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.951436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.951550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.951577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.951695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.951723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.951837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.951865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.951975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.952002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.952083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.952110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.952220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.952246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.952332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.952359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.952455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.952483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.952610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.952649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 
00:24:19.733 [2024-07-15 10:41:07.952742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.952770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.952897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.952924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.953037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.953064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.953159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.953185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.953281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.953307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.953391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.953418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.953505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.953532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.953639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.953665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.953782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.953816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.733 qpair failed and we were unable to recover it. 00:24:19.733 [2024-07-15 10:41:07.953900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.733 [2024-07-15 10:41:07.953927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 
00:24:19.734 [2024-07-15 10:41:07.954036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.954062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.954158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.954186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.954300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.954326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.954435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.954462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.954571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.954598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.954754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.954793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.954901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.954929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.955030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.955057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.955163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.955189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.955282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.955308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 
00:24:19.734 [2024-07-15 10:41:07.955389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.955417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.955507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.955535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.955630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.955658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.955773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.955806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.955925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.955952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.956036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.956062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.956153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.956179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.956291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.956317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.956409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.956435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.956552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.956578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 
00:24:19.734 [2024-07-15 10:41:07.956701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.956729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.956870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.956899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.956981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.957007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.957097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.957124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.957241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.957267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.957353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.957380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.957492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.957518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.957629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.957657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.957745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.957772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.957864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.957891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 
00:24:19.734 [2024-07-15 10:41:07.958003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.958030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.958143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.958170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.958282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.958309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.958429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.958456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.958567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.958593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.958705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.958732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.958844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.958871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.958957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.958984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.959101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.959128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 00:24:19.734 [2024-07-15 10:41:07.959237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.734 [2024-07-15 10:41:07.959264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.734 qpair failed and we were unable to recover it. 
00:24:19.735 [2024-07-15 10:41:07.959376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.959402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.959489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.959517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.959607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.959635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.959736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.959775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.959907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.959935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.960017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.960044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.960132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.960162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.960300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.960326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.960410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.960437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.960538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.960578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 
00:24:19.735 [2024-07-15 10:41:07.960731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.960771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.960869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.960897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.960998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.961025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.961112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.961139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.961231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.961258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.961374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.961402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.961487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.961513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.961616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.961642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.961784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.961821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.961923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.961949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 
00:24:19.735 [2024-07-15 10:41:07.962038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.962068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.962209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.962235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.962353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.962379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.962470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.962496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.962609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.962637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.962748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.962775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.962866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.962894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.962989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.963015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.963125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.963151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.963300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.963327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 
00:24:19.735 [2024-07-15 10:41:07.963410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.963437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.963525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.963555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.735 qpair failed and we were unable to recover it. 00:24:19.735 [2024-07-15 10:41:07.963641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.735 [2024-07-15 10:41:07.963668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.963796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.963831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.963929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.963953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.964048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.964073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.964149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.964175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.964263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.964287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.964412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.964438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.964530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.964554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 
00:24:19.736 [2024-07-15 10:41:07.964673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.964708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.964835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.964864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.964954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.964981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.965064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.965091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.965201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.965228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.965336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.965361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.965496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.965529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.965615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.965641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.965756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.965782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.965875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.965902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 
00:24:19.736 [2024-07-15 10:41:07.965984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.966010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.966095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.966121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.966202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.966227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.966342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.966369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.966459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.966485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.966632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.966659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.966783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.966825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.966918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.966945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.967027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.967054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.967158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.967192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 
00:24:19.736 [2024-07-15 10:41:07.967316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.967345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.967437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.967464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.967583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.967611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.967723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.967749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.967833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.967862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.967981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.968007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.968098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.968124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.968237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.968264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.968359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.968387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.968494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.968521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 
00:24:19.736 [2024-07-15 10:41:07.968652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.968691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.968789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.968824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.968915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.736 [2024-07-15 10:41:07.968944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.736 qpair failed and we were unable to recover it. 00:24:19.736 [2024-07-15 10:41:07.969055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.969086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.969204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.969231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.969349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.969376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.969460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.969488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.969572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.969599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.969681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.969707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.969797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.969830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 
00:24:19.737 [2024-07-15 10:41:07.969942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.969969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.970049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.970075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.970194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.970221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.970317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.970345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.970425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.970452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.970539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.970567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.970653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.970681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.970822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.970852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.970970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.970995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.971087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.971113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 
00:24:19.737 [2024-07-15 10:41:07.971227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.971252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.971343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.971370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.971458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.971486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.971586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.971613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.971728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.971754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.971864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.971891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.971976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.972004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.972148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.972175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.972272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.972299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.972388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.972415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 
00:24:19.737 [2024-07-15 10:41:07.972541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.972568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.972657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.972683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.972794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.972826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.972924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.972949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.973058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.973083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.973221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.973246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.973358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.973383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.973478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.973518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.973622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.973650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.973770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.973797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 
00:24:19.737 [2024-07-15 10:41:07.973922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.973948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.974032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.974059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.974149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.974175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.974267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.737 [2024-07-15 10:41:07.974299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.737 qpair failed and we were unable to recover it. 00:24:19.737 [2024-07-15 10:41:07.974440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.974466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.974542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.974567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.974706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.974731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.974827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.974857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.974971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.974998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.975110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.975135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 
00:24:19.738 [2024-07-15 10:41:07.975256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.975282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.975397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.975422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.975501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.975527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.975617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.975643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.975781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.975814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.975907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.975931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.976015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.976040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.976121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.976147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.976255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.976280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.976364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.976392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 
00:24:19.738 [2024-07-15 10:41:07.976489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.976530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.976645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.976673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.976788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.976833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.976954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.976980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.977068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.977094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.977201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.977226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.977318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.977342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.977454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.977479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.977560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.977584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.977667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.977692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 
00:24:19.738 [2024-07-15 10:41:07.977770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.977799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.977929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.977955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.978065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.978091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.978202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.978227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.978309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.978335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.978418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.978443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.978557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.978583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.978670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.978709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.978807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.978836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.978985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.979016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 
00:24:19.738 [2024-07-15 10:41:07.979133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.979162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.979249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.979276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.979394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.979421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.979542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.979569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.738 [2024-07-15 10:41:07.979659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.738 [2024-07-15 10:41:07.979686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.738 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.979777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.979809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.979898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.979925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.980035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.980061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.980200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.980226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.980365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.980391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 
00:24:19.739 [2024-07-15 10:41:07.980490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.980530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.980626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.980653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.980764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.980791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.980897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.980923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.981033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.981060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.981201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.981227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.981317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.981345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.981462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.981493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.981608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.981635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.981749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.981776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 
00:24:19.739 [2024-07-15 10:41:07.981892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.981919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.982032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.982059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.982177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.982204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.982293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.982321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.982426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.982465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.982560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.982588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.982725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.982757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.982875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.982902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.982988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.983014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.983139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.983178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 
00:24:19.739 [2024-07-15 10:41:07.983303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.983332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.983433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.983460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.983570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.983597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.983682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.983709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.983798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.983834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.983947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.983974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.984066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.984094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.984172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.984198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.984292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.984320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.984398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.984424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 
00:24:19.739 [2024-07-15 10:41:07.984552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.984592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.984708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.984735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.984856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.984882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.984967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.984992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.985076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.739 [2024-07-15 10:41:07.985101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.739 qpair failed and we were unable to recover it. 00:24:19.739 [2024-07-15 10:41:07.985182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.985207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.985315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.985341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.985425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.985450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.985564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.985590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.985700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.985728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 
00:24:19.740 [2024-07-15 10:41:07.985856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.985885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.985977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.986003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.986110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.986136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.986223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.986251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.986369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.986399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.986505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.986532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.986686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.986725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.986826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.986860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.986977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.987004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.987112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.987139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 
00:24:19.740 [2024-07-15 10:41:07.987249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.987276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.987371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.987397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.987506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.987532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.987629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.987669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.987767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.987796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.987951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.987978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.988068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.988095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.988181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.988209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.988333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.988362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.988472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.988499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 
00:24:19.740 [2024-07-15 10:41:07.988588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.988615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.988730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.988756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.988856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.988884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.988999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.989026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.989111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.989138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.989218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.989244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.989337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.989378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.989499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.989527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.989658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.989697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 00:24:19.740 [2024-07-15 10:41:07.989815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.740 [2024-07-15 10:41:07.989842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.740 qpair failed and we were unable to recover it. 
00:24:19.741 [2024-07-15 10:41:07.989927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.989955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.990040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.990067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.990153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.990179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.990266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.990293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.990412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.990440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.990561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.990600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.990719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.990744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.990835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.990861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.990944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.990969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.991082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.991107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 
00:24:19.741 [2024-07-15 10:41:07.991189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.991215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.991327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.991353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.991501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.991531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.991627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.991655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.991758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.991798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.991925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.991953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.992068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.992095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.992205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.992232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.992324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.992352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.992437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.992465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 
00:24:19.741 [2024-07-15 10:41:07.992574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.992601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.992711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.992738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.992862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.992892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.992991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.993019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.993101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.993128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.993240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.993267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.993346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.993373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.993518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.993545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.993641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.993681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.993777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.993811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 
00:24:19.741 [2024-07-15 10:41:07.993935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.993962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.994060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.994087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.994178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.994205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.994299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.994329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.994479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.994507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.994627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.994656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.994771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.994797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.994940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.994965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.995077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.995104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 00:24:19.741 [2024-07-15 10:41:07.995212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.741 [2024-07-15 10:41:07.995237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.741 qpair failed and we were unable to recover it. 
00:24:19.741 [2024-07-15 10:41:07.995325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.995353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.995443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.995470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.995555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.995582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.995693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.995719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.995811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.995842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.995942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.995970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.996064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.996091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.996203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.996228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.996318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.996343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.996422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.996447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 
00:24:19.742 [2024-07-15 10:41:07.996561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.996586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.996695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.996721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.996840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.996869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.996962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.996989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.997103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.997129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.997212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.997239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.997370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.997411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.997507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.997536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.997660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.997687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.997778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.997813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 
00:24:19.742 [2024-07-15 10:41:07.997903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.997930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.998009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.998036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.998170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.998197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.998284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.998316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.998427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.998456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.998571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.998597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.998738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.998768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.998893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.998921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.999011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.999038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.999178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.999205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 
00:24:19.742 [2024-07-15 10:41:07.999317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.999344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.999438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.999472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.999592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.999620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.999707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.999735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.999848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.999875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:07.999968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:07.999999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:08.000092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:08.000123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:08.000239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:08.000276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:08.000410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:08.000436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 00:24:19.742 [2024-07-15 10:41:08.000551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:08.000576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.742 qpair failed and we were unable to recover it. 
00:24:19.742 [2024-07-15 10:41:08.000663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.742 [2024-07-15 10:41:08.000691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.000784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.000823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.000912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.000939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.001056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.001083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.001188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.001215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.001307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.001335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.001464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.001493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.001577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.001602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.001693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.001722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.001870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.001898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 
00:24:19.743 [2024-07-15 10:41:08.002012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.002038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.002126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.002152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.002235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.002262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.002355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.002382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.002475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.002504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.002617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.002643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.002756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.002783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.002905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.002930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.003068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.003099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.003216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.003244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 
00:24:19.743 [2024-07-15 10:41:08.003363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.003392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.003480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.003507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.003632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.003673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.003763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.003791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.003893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.003920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.004003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.004030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.004141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.004168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.004245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.004272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.004390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.004416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.004532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.004561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 
00:24:19.743 [2024-07-15 10:41:08.004653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.004693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.004795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.004836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.004937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.004965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.005055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.005082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.005171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.005198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.005278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.005307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.005383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.005409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.005503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.005529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.005620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.005648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.005744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.005773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 
00:24:19.743 [2024-07-15 10:41:08.005893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.743 [2024-07-15 10:41:08.005919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.743 qpair failed and we were unable to recover it. 00:24:19.743 [2024-07-15 10:41:08.006057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.006083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.006163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.006189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.006265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.006292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.006407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.006435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.006526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.006553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.006706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.006746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.006837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.006866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.006981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.007007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.007115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.007143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 
00:24:19.744 [2024-07-15 10:41:08.007254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.007280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.007366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.007392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.007501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.007527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.007614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.007642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.007780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.007813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.007915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.007944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.008030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.008057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.008173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.008200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.008286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.008315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.008428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.008455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 
00:24:19.744 [2024-07-15 10:41:08.008555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.008595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.008716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.008744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.008861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.008900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.009020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.009048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.009158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.009184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.009274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.009300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.009385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.009411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.009496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.009522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.009610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.009636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.009725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.009752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 
00:24:19.744 [2024-07-15 10:41:08.009849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.009876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.009960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.009986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.010131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.010157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.010268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.010295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.010382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.010408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.010494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.744 [2024-07-15 10:41:08.010522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.744 qpair failed and we were unable to recover it. 00:24:19.744 [2024-07-15 10:41:08.010637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.010663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.010779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.010813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.010922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.010948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.011032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.011059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 
00:24:19.745 [2024-07-15 10:41:08.011145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.011171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.011279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.011305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.011419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.011447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.011541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.011569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.011651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.011679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.011828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.011856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.011975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.012002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.012090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.012115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.012223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.012249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.012333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.012358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 
00:24:19.745 [2024-07-15 10:41:08.012443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.012472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.012559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.012586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.012687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.012727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.012872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.012900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.012992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.013018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.013109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.013136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.013228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.013254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.013360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.013386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.013466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.013492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.013580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.013607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 
00:24:19.745 [2024-07-15 10:41:08.013713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.013738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.013847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.013874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.013964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.013990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.014069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.014096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.014181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.014207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.014301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.014328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.014443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.014471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.014596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.014625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.014716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.014743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.014827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.014855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 
00:24:19.745 [2024-07-15 10:41:08.014969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.014995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.015083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.015111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.015227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.015255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.015375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.015400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.015494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.015520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.015611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.015637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.015753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.745 [2024-07-15 10:41:08.015779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.745 qpair failed and we were unable to recover it. 00:24:19.745 [2024-07-15 10:41:08.015907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.015947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.016039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.016066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.016182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.016211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 
00:24:19.746 [2024-07-15 10:41:08.016297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.016324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.016423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.016453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.016539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.016565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.016678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.016705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.016796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.016835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.016946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.016971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.017064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.017089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.017198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.017224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.017320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.017345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.017427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.017453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 
00:24:19.746 [2024-07-15 10:41:08.017578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.017606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.017690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.017717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.017814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.017840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.017928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.017954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.018064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.018090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.018199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.018225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.018340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.018368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.018494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.018533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.018628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.018656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.018753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.018780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 
00:24:19.746 [2024-07-15 10:41:08.018919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.018946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.019033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.019060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.019178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.019204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.019303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.019330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.019422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.019452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.019566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.019593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.019701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.019727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.019814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.019842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.019926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.019952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.020037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.020063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 
00:24:19.746 [2024-07-15 10:41:08.020142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.020168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.020259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.020285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.020366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.020397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.020481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.020509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.020585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.020611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.020691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.020716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.020792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.746 [2024-07-15 10:41:08.020831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.746 qpair failed and we were unable to recover it. 00:24:19.746 [2024-07-15 10:41:08.020911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.020937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.021021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.021050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.021131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.021135] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:19.747 [2024-07-15 10:41:08.021157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 [2024-07-15 10:41:08.021167] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.021181] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.747 [2024-07-15 10:41:08.021194] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.747 [2024-07-15 10:41:08.021204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.747 [2024-07-15 10:41:08.021270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.021297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.021410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.021438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.021523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.021551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.021636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.021662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.021758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.021786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.021914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.021941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.022034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.022061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 
00:24:19.747 [2024-07-15 10:41:08.022175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.022201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.022290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.022317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.022425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.022451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.022537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.022563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.022650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.022677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.022766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.022794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.022824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:19.747 [2024-07-15 10:41:08.022858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:19.747 [2024-07-15 10:41:08.022904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:19.747 [2024-07-15 10:41:08.022908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:19.747 [2024-07-15 10:41:08.022919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.022946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.023056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.023081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.023190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.023217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 
00:24:19.747 [2024-07-15 10:41:08.023307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.023333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.023415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.023441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.023533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.023560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.023650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.023678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.023770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.023799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.023893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.023921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.024008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.024035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.024113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.024140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.024253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.024280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.024365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.024392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 
00:24:19.747 [2024-07-15 10:41:08.024483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.024510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.024595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.024622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.024712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.024739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.024820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.024853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.024933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.024960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.025047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.025073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.025154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.025180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.025256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.747 [2024-07-15 10:41:08.025282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.747 qpair failed and we were unable to recover it. 00:24:19.747 [2024-07-15 10:41:08.025396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.025422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.025517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.025545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 
00:24:19.748 [2024-07-15 10:41:08.025629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.025656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.025769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.025797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.025902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.025929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.026010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.026037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.026125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.026152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.026244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.026271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.026347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.026374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.026516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.026543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.026623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.026649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.026744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.026770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 
00:24:19.748 [2024-07-15 10:41:08.026873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.026900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.027009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.027035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.027150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.027176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.027272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.027299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.027390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.027417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.027544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.027585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.027707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.027735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.027831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.027858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.027946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.027973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.028062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.028089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 
00:24:19.748 [2024-07-15 10:41:08.028178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.028206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.028298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.028324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.028436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.028462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.028575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.028601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.028685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.028712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.028794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.028826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.028914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.028941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.029022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.029049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.029135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.029161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.029269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.029296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 
00:24:19.748 [2024-07-15 10:41:08.029407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.029436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.029533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.029560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.029652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.029679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.748 [2024-07-15 10:41:08.029791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.748 [2024-07-15 10:41:08.029832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.748 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.029919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.029946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.030041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.030067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.030180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.030207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.030294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.030320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.030433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.030460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.030576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.030602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 
00:24:19.749 [2024-07-15 10:41:08.030685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.030712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.030796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.030831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.030923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.030950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.031039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.031066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.031156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.031182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.031270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.031298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.031411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.031437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.031529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.031556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.031653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.031681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.031796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.031830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 
00:24:19.749 [2024-07-15 10:41:08.031918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.031944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.032022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.032048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.032131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.032157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.032277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.032304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.032396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.032424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.032522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.032549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.032626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.032652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.032765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.032792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.032887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.032915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.033029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.033056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 
00:24:19.749 [2024-07-15 10:41:08.033169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.033195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.033284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.033310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.033395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.033422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.033507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.033533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.033625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.033652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.033741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.033767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.033867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.033894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.033983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.034010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.034100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.034128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.034209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.034237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 
00:24:19.749 [2024-07-15 10:41:08.034323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.034350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.034436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.034463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.034543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.034569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.034656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.034687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.749 qpair failed and we were unable to recover it. 00:24:19.749 [2024-07-15 10:41:08.034826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.749 [2024-07-15 10:41:08.034854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.034936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.034963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.035060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.035087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.035168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.035195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.035310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.035336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.035426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.035453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 
00:24:19.750 [2024-07-15 10:41:08.035542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.035569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.035681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.035707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.035795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.035829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.035925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.035951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.036039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.036066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.036147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.036174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.036289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.036316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.036431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.036459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.036560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.036587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.036672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.036699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 
00:24:19.750 [2024-07-15 10:41:08.036812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.036840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.036931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.036957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.037037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.037064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.037181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.037208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.037324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.037350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.037436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.037462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.037575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.037602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.037717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.037744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.037825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.037852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.037927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.037954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 
00:24:19.750 [2024-07-15 10:41:08.038052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.038079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.038164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.038191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.038270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.038297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.038397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.038423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.038507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.038535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.038614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.038640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.038788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.038822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.038928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.038955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.039041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.039069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.039156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.039183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 
00:24:19.750 [2024-07-15 10:41:08.039257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.039284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.039392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.039435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.039580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.039618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.039741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.039776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.039879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.750 [2024-07-15 10:41:08.039906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.750 qpair failed and we were unable to recover it. 00:24:19.750 [2024-07-15 10:41:08.039985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.040011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.040105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.040133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.040234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.040261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.040376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.040418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.040517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.040544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 
00:24:19.751 [2024-07-15 10:41:08.040682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.040711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.040794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.040831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.040946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.040973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.041062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.041099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.041208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.041235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.041351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.041392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.041515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.041549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.041680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.041711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.041841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.041870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.041957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.041985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 
00:24:19.751 [2024-07-15 10:41:08.042089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.042118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.042251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.042279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.042379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.042406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.042499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.042529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.042677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.042705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.042788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.042820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.042971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.042999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.043110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.043138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.043243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.043270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.043356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.043383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 
00:24:19.751 [2024-07-15 10:41:08.043493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.043527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.043617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.043646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.043790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.043828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.043954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.043994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.044088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.044116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.044208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.044237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.044355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.044382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.044489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.044517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.044622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.044651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.044736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.044764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 
00:24:19.751 [2024-07-15 10:41:08.044879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.044906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.044992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.045019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.045105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.045133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.045247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.045282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.045378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.045428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.751 qpair failed and we were unable to recover it. 00:24:19.751 [2024-07-15 10:41:08.045513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.751 [2024-07-15 10:41:08.045541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.045638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.045665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.045745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.045772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.045899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.045927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.046011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.046038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 
00:24:19.752 [2024-07-15 10:41:08.046128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.046155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.046275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.046301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.046405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.046432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.046514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.046545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.046630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.046656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.046737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.046766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.046859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.046886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.046980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.047007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.047081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.047112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.047196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.047222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 
00:24:19.752 [2024-07-15 10:41:08.047329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.047361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.047454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.047481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.047562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.047590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.047683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.047711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.047821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.047863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.047963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.047990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.048109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.048137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.048219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.048244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.048366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.048403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.048485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.048513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 
00:24:19.752 [2024-07-15 10:41:08.048609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.048638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.048735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.048764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.048886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.048914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.049009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.049037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.049161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.049192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.049275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.049305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.049409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.049437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.049551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.049587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.049670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.049698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.049784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.049831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 
00:24:19.752 [2024-07-15 10:41:08.049938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.049965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.050074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.050106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.050182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.050209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.752 qpair failed and we were unable to recover it. 00:24:19.752 [2024-07-15 10:41:08.050321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.752 [2024-07-15 10:41:08.050352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.050443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.050469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.050585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.050613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.050734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.050767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.050863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.050892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.050991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.051020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.052815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.052847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 
00:24:19.753 [2024-07-15 10:41:08.052947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.052976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.053092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.053122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.053224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.053252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.053374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.053414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.053511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.053538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.053633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.053662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.053775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.053819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.053915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.053943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.054078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.054107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.054234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.054263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 
00:24:19.753 [2024-07-15 10:41:08.054385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.054412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.054540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.054569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.054659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.054687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.054778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.054813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.054933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.054961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.055077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.055114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.055229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.055258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.055365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.055393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.057326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.057371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.057491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.057521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 
00:24:19.753 [2024-07-15 10:41:08.057675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.057713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.057799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.057834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.057929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.057955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.058046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.058073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.058178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.058204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.058300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.058326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.058425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.058477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.058579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.058607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.058701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.058729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 00:24:19.753 [2024-07-15 10:41:08.058812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.753 [2024-07-15 10:41:08.058840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.753 qpair failed and we were unable to recover it. 
00:24:19.753 [2024-07-15 10:41:08.058955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.058983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.059075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.059103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.059187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.059217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.059315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.059354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.059478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.059506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.059594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.059620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.059703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.059729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.059813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.059849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.059959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.059985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.060066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.060093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 
00:24:19.754 [2024-07-15 10:41:08.060207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.060234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.060309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.060335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.060448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.060477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.060589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.060616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.060712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.060740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.060824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.060854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.060941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.060978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.061094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.061121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.061248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.061276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.061365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.061391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 
00:24:19.754 [2024-07-15 10:41:08.061536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.061565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.061642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.061667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.061769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.061796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.061890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.061917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.062000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.062025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.062169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.062195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.062319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.062345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.062425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.062450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.062567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.062596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.062694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.062722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 
00:24:19.754 [2024-07-15 10:41:08.062820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.062852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.062941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.062968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.063047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.063076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.063189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.063216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.063303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.063330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.063415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.063442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.063548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.063574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.063660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.063687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.063858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.063899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 00:24:19.754 [2024-07-15 10:41:08.064007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.064048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.754 qpair failed and we were unable to recover it. 
00:24:19.754 [2024-07-15 10:41:08.064177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.754 [2024-07-15 10:41:08.064205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.064315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.064342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.064433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.064460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.064550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.064576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.064675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.064701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.064791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.064840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.064930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.064956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.065072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.065098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.065182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.065209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.065327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.065355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 
00:24:19.755 [2024-07-15 10:41:08.065446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.065478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.065563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.065590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.065669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.065696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.065786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.065821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.065933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.065962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.066040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.066067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.066155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.066185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.066288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.066314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.066405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.066431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.066547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.066573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 
00:24:19.755 [2024-07-15 10:41:08.066681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.066708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.066808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.066836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.066920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.066950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.067033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.067070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.067185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.067211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.067326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.067355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.067441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.067468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.067549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.067574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.067661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.067687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.067763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.067788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 
00:24:19.755 [2024-07-15 10:41:08.067923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.067960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.068045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.068072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.068156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.068183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.068260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.068287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.068400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.068427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.068520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.068548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.068646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.068690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.068786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.068827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.068909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.068936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.069030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.069056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 
00:24:19.755 [2024-07-15 10:41:08.069141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.755 [2024-07-15 10:41:08.069169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.755 qpair failed and we were unable to recover it. 00:24:19.755 [2024-07-15 10:41:08.069276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.069302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.069381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.069408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.069498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.069525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.069656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.069683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.069772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.069798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.069911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.069938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.070017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.070043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.070163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.070190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.070307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.070333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 
00:24:19.756 [2024-07-15 10:41:08.070412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.070439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.070522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.070548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.070640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.070669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.070756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.070783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.070898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.070925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.071008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.071035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.071154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.071181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.071283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.071311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.071395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.071422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.071498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.071525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 
00:24:19.756 [2024-07-15 10:41:08.071629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.071655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.071749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.071777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.071876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.071903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.071981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.072007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.072105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.072132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.072246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.072273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.072361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.072388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.072476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.072502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.072615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.072643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.072751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.072778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 
00:24:19.756 [2024-07-15 10:41:08.072890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.072938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.073033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.073062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.073150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.073178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.073266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.073294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.073433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.073460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.073547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.073575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.073692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.073720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.073842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.073871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.073988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.074015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.074136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.074163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 
00:24:19.756 [2024-07-15 10:41:08.074248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.074276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.756 qpair failed and we were unable to recover it. 00:24:19.756 [2024-07-15 10:41:08.074361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.756 [2024-07-15 10:41:08.074389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.757 qpair failed and we were unable to recover it. 00:24:19.757 [2024-07-15 10:41:08.074478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.757 [2024-07-15 10:41:08.074506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.757 qpair failed and we were unable to recover it. 00:24:19.757 [2024-07-15 10:41:08.074594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.757 [2024-07-15 10:41:08.074621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.757 qpair failed and we were unable to recover it. 00:24:19.757 [2024-07-15 10:41:08.074706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.757 [2024-07-15 10:41:08.074733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.757 qpair failed and we were unable to recover it. 00:24:19.757 [2024-07-15 10:41:08.074851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.757 [2024-07-15 10:41:08.074878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.757 qpair failed and we were unable to recover it. 00:24:19.757 [2024-07-15 10:41:08.074984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.757 [2024-07-15 10:41:08.075011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.757 qpair failed and we were unable to recover it. 00:24:19.757 [2024-07-15 10:41:08.075088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.757 [2024-07-15 10:41:08.075115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.757 qpair failed and we were unable to recover it. 00:24:19.757 [2024-07-15 10:41:08.075191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.757 [2024-07-15 10:41:08.075218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.757 qpair failed and we were unable to recover it. 00:24:19.757 [2024-07-15 10:41:08.075299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.757 [2024-07-15 10:41:08.075326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.757 qpair failed and we were unable to recover it. 
00:24:19.757 [2024-07-15 10:41:08.075418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.757 [2024-07-15 10:41:08.075447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.757 qpair failed and we were unable to recover it. 00:24:19.757 [2024-07-15 10:41:08.075557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.757 [2024-07-15 10:41:08.075584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.757 qpair failed and we were unable to recover it. 00:24:19.757 [2024-07-15 10:41:08.075667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.075696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.075779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.075811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.075905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.075931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.076039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.076067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.076154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.076180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.076271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.076299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.076419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.076445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.076528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.076556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 
00:24:19.780 [2024-07-15 10:41:08.076641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.076668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.076753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.076783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.076882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.076908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.077015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.077043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.077133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.077161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.077246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.077273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.077350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.077377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.077530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.077571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.077678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.077718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.077840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.077868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 
00:24:19.780 [2024-07-15 10:41:08.077950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.077985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.078116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.078141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.078222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.780 [2024-07-15 10:41:08.078248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.780 qpair failed and we were unable to recover it. 00:24:19.780 [2024-07-15 10:41:08.078326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.078352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.078435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.078461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.078554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.078583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.078699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.078726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.078824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.078855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.078946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.078974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.079053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.079080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 
00:24:19.781 [2024-07-15 10:41:08.079159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.079184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.079292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.079320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.079432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.079459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.079545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.079572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.079665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.079691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.079788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.079821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.079935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.079965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.080044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.080069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.080158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.080184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.080278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.080305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 
00:24:19.781 [2024-07-15 10:41:08.080421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.080450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.080549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.080579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.080656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.080682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.080756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.080782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.080900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.080939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.081058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.081087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.081178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.081205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.081302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.081329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.081422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.081448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.081535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.081562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 
00:24:19.781 [2024-07-15 10:41:08.081676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.081704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.081790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.081826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.081941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.081968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.082084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.082124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.082199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.082226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.082312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.082340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.082421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.082448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.082532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.082560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.082684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.082723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.082840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.082869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 
00:24:19.781 [2024-07-15 10:41:08.082956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.082989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.083074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.083105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.083244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.781 [2024-07-15 10:41:08.083270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.781 qpair failed and we were unable to recover it. 00:24:19.781 [2024-07-15 10:41:08.083357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.083382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.083468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.083494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.083586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.083614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.083732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.083761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.083860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.083887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.083969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.083996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.084078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.084113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 
00:24:19.782 [2024-07-15 10:41:08.084224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.084252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.084360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.084388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.084479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.084506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.084606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.084633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.084736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.084777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.084890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.084918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.085030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.085057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.085184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.085212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.085322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.085349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.085461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.085488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 
00:24:19.782 [2024-07-15 10:41:08.085575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.085602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.085696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.085723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.085820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.085854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.085939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.085966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.086046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.086074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.086178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.086206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.086297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.086325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.086409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.086436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.086519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.086547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.086661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.086689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 
00:24:19.782 [2024-07-15 10:41:08.086787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.086830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.086921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.086948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.087032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.087058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.087157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.087197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.087316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.087343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.087434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.087461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.087547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.087573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.087659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.087687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.087769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.087795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.087931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.087958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 
00:24:19.782 [2024-07-15 10:41:08.088048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.088080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.088192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.088219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.088297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.088323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.782 qpair failed and we were unable to recover it. 00:24:19.782 [2024-07-15 10:41:08.088442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.782 [2024-07-15 10:41:08.088469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.088548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.088574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.088689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.088718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.088814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.088854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.088945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.088973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.089060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.089088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.089177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.089205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 
00:24:19.783 [2024-07-15 10:41:08.089313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.089341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.089431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.089458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.089570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.089597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.089680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.089707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.089817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.089853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.089947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.089975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.090085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.090123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.090205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.090232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.090321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.090348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.090429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.090457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 
00:24:19.783 [2024-07-15 10:41:08.090566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.090594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.090710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.090739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.090835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.090863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.090944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.090970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.091113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.091139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.091223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.091249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.091340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.091366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.091452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.091478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.091562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.091588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.091668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.091697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 
00:24:19.783 [2024-07-15 10:41:08.091777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.091807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.091899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.091924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.092037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.092063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.092170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.092196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.092277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.092302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.092386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.092412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.092523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.092549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.092641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.092667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.092740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.092766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.092856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.092883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 
00:24:19.783 [2024-07-15 10:41:08.092997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.093027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.093107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.093133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.093219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.093247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.093333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.783 [2024-07-15 10:41:08.093359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.783 qpair failed and we were unable to recover it. 00:24:19.783 [2024-07-15 10:41:08.093446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.093471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.093588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.093620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.093739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.093767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.093867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.093894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.093989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.094016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.094133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.094160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 
00:24:19.784 [2024-07-15 10:41:08.094244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.094275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.094360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.094388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.094526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.094566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.094656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.094684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.094796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.094830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.094920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.094947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.095036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.095062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.095143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.095169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.095252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.095279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.095375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.095401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 
00:24:19.784 [2024-07-15 10:41:08.095515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.095541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.095650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.095676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.095779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.095830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.095922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.095950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.096037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.096065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.096188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.096215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.096323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.096349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.096437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.096464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.096553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.096581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.096695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.096722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 
00:24:19.784 [2024-07-15 10:41:08.096830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.784 [2024-07-15 10:41:08.096858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.784 qpair failed and we were unable to recover it. 00:24:19.784 [2024-07-15 10:41:08.096967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.096995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.097085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.097111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.097223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.097250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.097333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.097360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.097491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.097518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.097603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.097629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.097720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.097746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.097848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.097876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.097966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.097992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 
00:24:19.785 [2024-07-15 10:41:08.098077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.098108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.098201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.098226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.098326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.098352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.098456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.098481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.098567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.098592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.098698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.098723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.098813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.098842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.098923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.098950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.099064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.099091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.099173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.099200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 
00:24:19.785 [2024-07-15 10:41:08.099304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.099331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.099448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.099485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.099566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.099593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.099677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.099704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.099822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.099850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.099936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.099962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.100051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.100077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.100165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.100192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.100282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.100309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.100420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.100446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 
00:24:19.785 [2024-07-15 10:41:08.100523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.100550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.100630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.100657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.100762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.100789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.100880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.100907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.101023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.101053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.101179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.101206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.101318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.101345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.101462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.101489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.101584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.101611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.101691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.101718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 
00:24:19.785 [2024-07-15 10:41:08.101812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.101839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.785 [2024-07-15 10:41:08.101925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.785 [2024-07-15 10:41:08.101952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.785 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.102030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.102057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.102142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.102169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.102249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.102275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.102390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.102419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.102501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.102530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.102612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.102639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.102717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.102743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.102854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.102882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 
00:24:19.786 [2024-07-15 10:41:08.102970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.103002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.103085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.103112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.103202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.103229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.103310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.103338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.103413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.103440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.103530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.103557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.103636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.103662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.103747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.103774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.103889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.103916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.104004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.104032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 
00:24:19.786 [2024-07-15 10:41:08.104116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.104143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.104225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.104252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.104334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.104361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.104443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.104470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.104564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.104605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.104690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.104717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.104798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.104832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.104918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.104944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.105025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.105050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.105132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.105157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 
00:24:19.786 [2024-07-15 10:41:08.105233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.105258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.105367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.105394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.105477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.105505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.105591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.105617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.105724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.105749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.105836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.105861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.105939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.105964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.106051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-07-15 10:41:08.106076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.786 qpair failed and we were unable to recover it. 00:24:19.786 [2024-07-15 10:41:08.106154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.106178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.106265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.106290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 
00:24:19.787 [2024-07-15 10:41:08.106429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.106453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.106528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.106553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.106639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.106663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.106767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.106791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.106890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.106915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.107004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.107029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.107109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.107133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.107210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.107234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.107336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.107364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.107440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.107464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 
00:24:19.787 [2024-07-15 10:41:08.107540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.107565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.107655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.107679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.107764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.107787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.107884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.107910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.107993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.108016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.108127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.108152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.108232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.108257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.108335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.108359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.108465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.108489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.108579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.108603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 
00:24:19.787 [2024-07-15 10:41:08.108691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.108715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.108790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.108823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.108911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.108935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.109018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.109041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.109133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.109158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.109238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.109262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.109351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.109379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.109501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.109527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.109610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.109635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.109715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.109740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 
00:24:19.787 [2024-07-15 10:41:08.109825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.109851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.109938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.109963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.110088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.110126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.110244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.110271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.110363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.110389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.110467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.110492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.110571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.110596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.110682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.787 [2024-07-15 10:41:08.110714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.787 qpair failed and we were unable to recover it. 00:24:19.787 [2024-07-15 10:41:08.110794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.110826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.110905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.110930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 
00:24:19.788 [2024-07-15 10:41:08.111038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.111065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.111147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.111173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.111250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.111276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.111386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.111412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.111500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.111527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.111609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.111636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.111729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.111757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.111855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.111881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.111962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.111988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.112069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.112095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 
00:24:19.788 [2024-07-15 10:41:08.112203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.112229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.112314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.112340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.112446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.112473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.112562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.112588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.112673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.112699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.112780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.112810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.112896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.112923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.113000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.113026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.113104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.113130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.113259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.113285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 
00:24:19.788 [2024-07-15 10:41:08.113365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.113392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.113474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.113500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.113609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.113634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.113713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.113739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.113859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.113888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.113972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.114000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.114115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.114142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.114253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.114283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.114365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.114392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.114499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.114526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 
00:24:19.788 [2024-07-15 10:41:08.114609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.114638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.114735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.114762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.114854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.114883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.114971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.114997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.788 [2024-07-15 10:41:08.115076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.788 [2024-07-15 10:41:08.115102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.788 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.115211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.115237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.115320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.115346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.115449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.115478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.115561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.115589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.115682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.115707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 
00:24:19.789 [2024-07-15 10:41:08.115787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.115825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.115908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.115934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.116017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.116042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.116120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.116145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.116226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.116252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.116331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.116357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.116445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.116471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.116548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.116573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.116691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.116717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.116821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.116847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 
00:24:19.789 [2024-07-15 10:41:08.116933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.116958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.117041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.117066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.117150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.117176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.117274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.117300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.117381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.117407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.117494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.117521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.117601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.117626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.117706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.117732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.117818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.117846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.117937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.117963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 
00:24:19.789 [2024-07-15 10:41:08.118072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.118098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.118178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.118204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.118292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.118319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.118406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.118431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.118518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.118544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.118629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.118654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.118731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.118756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.118850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.118876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.789 [2024-07-15 10:41:08.118977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.789 [2024-07-15 10:41:08.119017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.789 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.119111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.119139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 
00:24:19.790 [2024-07-15 10:41:08.119219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.119246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.119334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.119360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.119444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.119472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.119561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.119589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.119694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.119734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.119823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.119852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.119988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.120015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.120115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.120150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.120292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.120319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.120407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.120434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 
00:24:19.790 [2024-07-15 10:41:08.120521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.120549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.120650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.120676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.120761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.120786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.120871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.120896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.120977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.121002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.121089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.121114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.121204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.121230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.121310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.121337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.121422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.121448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.121526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.121552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 
00:24:19.790 [2024-07-15 10:41:08.121664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.121692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.121814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.121842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.121949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.121976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.122056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.122083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.122162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.122189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.122270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.122296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.122376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.122403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.122508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.122535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.122620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.122647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.122734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.122761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 
00:24:19.790 [2024-07-15 10:41:08.122879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.122909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.122996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.123024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.123136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.123164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.123251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.123278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.123396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.123424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.123538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.123565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.123649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.123677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.123760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.123787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.123883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.123911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.790 qpair failed and we were unable to recover it. 00:24:19.790 [2024-07-15 10:41:08.123994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.790 [2024-07-15 10:41:08.124020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 
00:24:19.791 [2024-07-15 10:41:08.124102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.124128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.124205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.124231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.124310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.124336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.124445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.124470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.124546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.124572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.124648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.124674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.124772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.124805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.124888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.124919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.125005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.125032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.125115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.125142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 
00:24:19.791 [2024-07-15 10:41:08.125252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.125279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.125354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.125380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.125486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.125513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.125604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.125631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.125718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.125746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.125832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.125859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.125939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.125966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.126074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.126101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.126186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.126213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.126298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.126323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 
00:24:19.791 [2024-07-15 10:41:08.126399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.126424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.126510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.126536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.126622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.126649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.126761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.126787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.126890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.126916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.126996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.127024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.127110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.127137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.127225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.127251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.127332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.127358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.127435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.127462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 
00:24:19.791 [2024-07-15 10:41:08.127550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.127576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.127664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.127691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.127771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.127797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.127888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.127915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.128002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.128031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.128116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.128142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.128227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.128253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.128332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.128357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.128469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.128496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 00:24:19.791 [2024-07-15 10:41:08.128583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.791 [2024-07-15 10:41:08.128609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.791 qpair failed and we were unable to recover it. 
00:24:19.791 [2024-07-15 10:41:08.128687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.128713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.128823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.128849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.128926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.128952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.129039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.129065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.129145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.129172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.129261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.129288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.129392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.129418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.129502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.129535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.129641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.129668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.129746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.129772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 
00:24:19.792 [2024-07-15 10:41:08.129864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.129893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.129974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.130001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.130084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.130110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.130185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.130213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.130296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.130322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.130436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.130462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.130573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.130600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.130734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.130760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.130890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.130919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.131004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.131031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 
00:24:19.792 [2024-07-15 10:41:08.131131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.131157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.131297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.131323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.131408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.131435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.131522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.131549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.131654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.131681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.131765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.131792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.131909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.131937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.132020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.132046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.132134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.132161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.132270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.132296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 
00:24:19.792 [2024-07-15 10:41:08.132381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.132407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.132490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.132516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.132607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.132634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.132746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.132774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.132878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.132906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.132992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.133018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.133097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.133123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.133239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.133265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.133349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.133376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.792 [2024-07-15 10:41:08.133461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.133487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 
00:24:19.792 [2024-07-15 10:41:08.133565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.792 [2024-07-15 10:41:08.133592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.792 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.133728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.133755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.133843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.133871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.133960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.133986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.134079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.134120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.134226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.134254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.134341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.134368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.134453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.134489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.134572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.134599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.134734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.134775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 
00:24:19.793 [2024-07-15 10:41:08.134884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.134912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.135025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.135050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.135138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.135165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.135272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.135298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.135376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.135402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.135483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.135510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.135586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.135611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.135722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.135747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.135829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.135855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.135944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.135970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 
00:24:19.793 [2024-07-15 10:41:08.136076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.136101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.136193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.136219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.136303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.136329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.136407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.136433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.136512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.136541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.136657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.136684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.136772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.136798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.136881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.136907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.136990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.137017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.137106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.137131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 
00:24:19.793 [2024-07-15 10:41:08.137239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.137265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.137349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.137375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.137462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.137489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.137567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.137594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.137675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.137706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.137792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.793 [2024-07-15 10:41:08.137824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.793 qpair failed and we were unable to recover it. 00:24:19.793 [2024-07-15 10:41:08.137930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.137956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.138034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.138059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.138134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.138159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.138239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.138265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 
00:24:19.794 [2024-07-15 10:41:08.138353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.138381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.138466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.138493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.138576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.138602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.138681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.138707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.138788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.138829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.138915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.138942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.139052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.139079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.139154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.139180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.139292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.139319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.139409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.139439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 
00:24:19.794 [2024-07-15 10:41:08.139527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.139554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.139665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.139693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.139779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.139815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.139903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.139930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.140011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.140037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.140118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.140144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.140230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.140256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.140331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.140357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.140436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.140466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.140575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.140605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 
00:24:19.794 [2024-07-15 10:41:08.140700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.140727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.140813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.140842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.140959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.140986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.141072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.141099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.141205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.141231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.141320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.141346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.141432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.141458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.141547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.141573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.141690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.141717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.141831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.141858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 
00:24:19.794 [2024-07-15 10:41:08.141943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.141969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.142048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.142073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.142153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.142179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.142291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.142317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.142423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.142449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.142564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.142591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.142674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.794 [2024-07-15 10:41:08.142699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.794 qpair failed and we were unable to recover it. 00:24:19.794 [2024-07-15 10:41:08.142861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.142888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.142976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.143003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.143120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.143147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 
00:24:19.795 [2024-07-15 10:41:08.143221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.143247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.143321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.143347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.143465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.143492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.143596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.143623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.143702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.143728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.143813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.143841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.143932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.143958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.144070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.144096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.144175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.144205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.144286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.144312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 
00:24:19.795 [2024-07-15 10:41:08.144419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.144446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.144554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.144580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.144659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.144685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.144758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.144784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.144867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.144893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.144977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.145002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.145085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.145111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.145220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.145247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.145331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.145356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.145441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.145465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 
00:24:19.795 [2024-07-15 10:41:08.145569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.145594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.145675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.145701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.145793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.145824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.145934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.145960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.146036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.146060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.146143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.146168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.146246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.146271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.146351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.146375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.146459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.146486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.146567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.146592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 
00:24:19.795 [2024-07-15 10:41:08.146675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.146702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.146809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.146850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.146949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.146989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.147094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.147122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.147210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.147237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.147321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.147356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.147445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.147471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.147558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.795 [2024-07-15 10:41:08.147584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.795 qpair failed and we were unable to recover it. 00:24:19.795 [2024-07-15 10:41:08.147679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.147706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.147790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.147826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 
00:24:19.796 [2024-07-15 10:41:08.147925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.147955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.148047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.148074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.148189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.148217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.148333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.148360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.148438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.148465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.148547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.148579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.148668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.148695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.148810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.148836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.148924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.148951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.149071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.149099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 
00:24:19.796 [2024-07-15 10:41:08.149222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.149250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.149338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.149366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.149447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.149474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.149559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.149586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.149668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.149695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.149777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.149811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.149902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.149928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.150007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.150033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.150121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.150148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.150227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.150253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 
00:24:19.796 [2024-07-15 10:41:08.150333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.150361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.150444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.150471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.150599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.150627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.150718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.150746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.150838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.150869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.150964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.150992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.151080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.151108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.151231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.151258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.151366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.151393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.151474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.151501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 
00:24:19.796 [2024-07-15 10:41:08.151614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.151641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.151722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.151750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.151847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.151879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.151974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.152001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.152114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.152141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.152222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.152256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.152373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.152400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.152487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.152514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.796 [2024-07-15 10:41:08.152608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.796 [2024-07-15 10:41:08.152637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.796 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.152730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.152760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 
00:24:19.797 [2024-07-15 10:41:08.152857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.152883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.152968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.152994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.153080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.153106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.153188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.153213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.153320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.153346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.153428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.153455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.153565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.153590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.153699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.153727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.153820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.153849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.153947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.153974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 
00:24:19.797 [2024-07-15 10:41:08.154090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.154121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.154234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.154261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.154371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.154397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.154482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.154510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.154631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.154658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.154741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.154771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.154862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.154893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.155015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.155041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.155139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.155167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.155250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.155277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 
00:24:19.797 [2024-07-15 10:41:08.155387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.155414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.155555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.155582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.155672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.155699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.155784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.155818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.155938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.155966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.156077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.156103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.156184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.156211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.156295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.156323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.156437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.156463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 
00:24:19.797 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:19.797 [2024-07-15 10:41:08.156576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.156603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:19.797 [2024-07-15 10:41:08.156686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.156712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:19.797 [2024-07-15 10:41:08.156851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.156878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.156956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.156983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.797 [2024-07-15 10:41:08.157067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.157093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.157184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.157210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.157319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.157345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.797 qpair failed and we were unable to recover it. 00:24:19.797 [2024-07-15 10:41:08.157425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.797 [2024-07-15 10:41:08.157461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 
00:24:19.798 [2024-07-15 10:41:08.157546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.157571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.157655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.157681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.157757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.157782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.157888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.157913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.158010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.158036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.158142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.158166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.158249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.158275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.158380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.158406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.158489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.158515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.158597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.158624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 
00:24:19.798 [2024-07-15 10:41:08.158706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.158736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.158842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.158882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.158972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.159000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.159082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.159110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.159225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.159252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.159361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.159388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.159476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.159503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.159596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.159622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.159709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.159736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.159826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.159854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 
00:24:19.798 [2024-07-15 10:41:08.159962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.159987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.160078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.160103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.160180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.160205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.798 [2024-07-15 10:41:08.160277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.798 [2024-07-15 10:41:08.160302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.798 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.160385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.160411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.160493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.160517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.160591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.160617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.160704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.160730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.160816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.160843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.160934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.160974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 
00:24:19.799 [2024-07-15 10:41:08.161132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.161172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.161260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.161291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.161405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.161433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.161524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.161552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.161639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.161665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.161781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.161827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.161910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.161936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.162034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.162064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.162155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.162184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.162275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.162302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 
00:24:19.799 [2024-07-15 10:41:08.162391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.162417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.162528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.162554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.162637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.162664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.162755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.162783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.162883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.162911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.162994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.163020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.163100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.163126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.163216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.163243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.163325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.163352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.163431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.163458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 
00:24:19.799 [2024-07-15 10:41:08.163553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.163583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.163664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.163690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.163808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.799 [2024-07-15 10:41:08.163834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.799 qpair failed and we were unable to recover it. 00:24:19.799 [2024-07-15 10:41:08.163916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.163942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.164047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.164072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.164153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.164178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.164281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.164307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.164389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.164415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.164499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.164525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.164604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.164630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 
00:24:19.800 [2024-07-15 10:41:08.164703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.164729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.164851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.164878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.164962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.164989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.165070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.165095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.165179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.165205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.165315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.165341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.165454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.165481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.165573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.165614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.165709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.165739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 00:24:19.800 [2024-07-15 10:41:08.165840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.800 [2024-07-15 10:41:08.165869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.800 qpair failed and we were unable to recover it. 
00:24:19.800 [2024-07-15 10:41:08.165948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.800 [2024-07-15 10:41:08.165975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420
00:24:19.800 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 10:41:08.166061 through 10:41:08.177654 (console timestamps 00:24:19.800-00:24:19.804), cycling over tqpair handles 0x7ff71c000b90, 0x7ff70c000b90, 0x7ff714000b90 and 0x1098200 ...]
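For reference, errno 111 on Linux is ECONNREFUSED: each of these entries is a TCP connect() toward the NVMe/TCP listener at 10.0.0.2:4420 that was actively refused, consistent with the target listener being taken down, which is what the nvmf_target_disconnect test case exercises. A minimal stand-alone C sketch (illustrative only, not part of this log or of SPDK; the address and port are just the values seen above) that produces the same errno when no listener is present:

    /* Illustrative sketch: a refused TCP connect reports errno 111 (ECONNREFUSED),
     * the same value logged by posix_sock_create above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* address/port taken from the log entries */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }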
00:24:19.804 [... repeated connect() failed (errno = 111) / sock connection error entries for tqpairs 0x7ff71c000b90 and 0x1098200, addr=10.0.0.2, port=4420, from 10:41:08.177733 onward, each ending "qpair failed and we were unable to recover it." ...]
00:24:19.804 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:19.804 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:19.804 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:19.804 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:19.804 [... the connect()/sock connection error sequence keeps repeating between and after these commands, for tqpairs 0x1098200, 0x7ff70c000b90 and 0x7ff71c000b90, through 10:41:08.179638 ...]
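The shell commands captured between the error entries are the test harness setting up the next stage. The trap registers cleanup (process_shm on the NVMF app's shared-memory ID, then nvmftestfini) so teardown runs even on SIGINT/SIGTERM or normal exit. rpc_cmd bdev_malloc_create 64 512 -b Malloc0 asks the running SPDK target, over its JSON-RPC socket, to create a RAM-backed bdev named Malloc0 of 64 MB with a 512-byte block size; rpc_cmd is the autotest wrapper around scripts/rpc.py, so the equivalent manual invocation would be ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0. The xtrace_disable / set +x pair simply switches off bash command tracing for the following block.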
00:24:19.804 [2024-07-15 10:41:08.179750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.804 [2024-07-15 10:41:08.179777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420
00:24:19.804 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence keeps repeating from 10:41:08.179868 through 10:41:08.190588 (console timestamps 00:24:19.804-00:24:19.807), cycling over tqpair handles 0x7ff71c000b90, 0x1098200, 0x7ff714000b90 and 0x7ff70c000b90, always with addr=10.0.0.2, port=4420 ...]
00:24:19.807 [2024-07-15 10:41:08.190683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.807 [2024-07-15 10:41:08.190716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.807 qpair failed and we were unable to recover it. 00:24:19.807 [2024-07-15 10:41:08.190832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.807 [2024-07-15 10:41:08.190872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.807 qpair failed and we were unable to recover it. 00:24:19.807 [2024-07-15 10:41:08.190987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.807 [2024-07-15 10:41:08.191028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.807 qpair failed and we were unable to recover it. 00:24:19.807 [2024-07-15 10:41:08.191123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.807 [2024-07-15 10:41:08.191151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.807 qpair failed and we were unable to recover it. 00:24:19.807 [2024-07-15 10:41:08.191261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.807 [2024-07-15 10:41:08.191288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.807 qpair failed and we were unable to recover it. 00:24:19.807 [2024-07-15 10:41:08.191371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.807 [2024-07-15 10:41:08.191398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.807 qpair failed and we were unable to recover it. 00:24:19.807 [2024-07-15 10:41:08.191502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.807 [2024-07-15 10:41:08.191529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.807 qpair failed and we were unable to recover it. 00:24:19.807 [2024-07-15 10:41:08.191619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.807 [2024-07-15 10:41:08.191647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.807 qpair failed and we were unable to recover it. 00:24:19.807 [2024-07-15 10:41:08.191734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.807 [2024-07-15 10:41:08.191766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.807 qpair failed and we were unable to recover it. 00:24:19.807 [2024-07-15 10:41:08.191859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.807 [2024-07-15 10:41:08.191886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.807 qpair failed and we were unable to recover it. 
00:24:19.807 [2024-07-15 10:41:08.192005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.192033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.192127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.192157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.192246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.192273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.192363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.192390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.192509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.192536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.192644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.192671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.192756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.192785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.192909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.192937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.193040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.193080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.193194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.193222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 
00:24:19.808 [2024-07-15 10:41:08.193312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.193339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.193430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.193458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.193565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.193591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.193678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.193705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.193786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.193820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.193902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.193929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.194010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.194036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.194082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a60e0 (9): Bad file descriptor 00:24:19.808 [2024-07-15 10:41:08.194185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.194214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.194304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.194330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 
00:24:19.808 [2024-07-15 10:41:08.194442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.194470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.194553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.194579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.194657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.194684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.194767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.194793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.194889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.194915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.195033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.195059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.195144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.195170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.195256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.195285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.195379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.195409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.195539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.195580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 
00:24:19.808 [2024-07-15 10:41:08.195695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.195721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.808 [2024-07-15 10:41:08.195822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.808 [2024-07-15 10:41:08.195849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.808 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.195942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.195968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.196043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.196068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.196148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.196174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.196320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.196349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.196445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.196473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.196583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.196610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.196721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.196747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.196835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.196863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 
00:24:19.809 [2024-07-15 10:41:08.196961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.196987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.197127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.197153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.197264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.197291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.197384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.197424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.197507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.197539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.197622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.197648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.197727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.197754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.197833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.197859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.197944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.197970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.198054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.198080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 
00:24:19.809 [2024-07-15 10:41:08.198175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.198200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.198310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.198336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.198447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.198472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.198558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.198584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.198664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.809 [2024-07-15 10:41:08.198689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.809 qpair failed and we were unable to recover it. 00:24:19.809 [2024-07-15 10:41:08.198785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.198820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.198924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.198951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.199066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.199092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.199179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.199207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.199318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.199344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 
00:24:19.810 [2024-07-15 10:41:08.199433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.199475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.199591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.199618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.199710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.199735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.199822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.199849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.199924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.199950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.200029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.200055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.200198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.200224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.200319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.200349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.200432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.200459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.200552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.200579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 
00:24:19.810 [2024-07-15 10:41:08.200658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.200685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.200830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.200863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.200981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.201009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.201096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.201124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.201210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.201238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.201330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.201358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.201456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.201484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.201593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.201619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.201705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.201731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.201826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.201852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 
00:24:19.810 [2024-07-15 10:41:08.201944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.201970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.202052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.202078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.202161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.202188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.202273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 Malloc0 00:24:19.810 [2024-07-15 10:41:08.202300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.810 [2024-07-15 10:41:08.202385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.810 [2024-07-15 10:41:08.202411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.810 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.202496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.202521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.202602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.202628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.811 [2024-07-15 10:41:08.202705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.202731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:19.811 [2024-07-15 10:41:08.202846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.202873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 
00:24:19.811 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.811 [2024-07-15 10:41:08.202983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.203010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.811 [2024-07-15 10:41:08.203090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.203116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.203227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.203253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.203338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.203365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.203451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.203477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.203556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.203582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.203664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.203690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.203773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.203799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.203916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.203942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 
00:24:19.811 [2024-07-15 10:41:08.204024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.204049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.204137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.204163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.204240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.204266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.204347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.204373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.204458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.204484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.204595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.204622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.204707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.204737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.204827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.204856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.204968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.204995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.205072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.205099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 
00:24:19.811 [2024-07-15 10:41:08.205241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.205268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.205354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.205382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.205487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.205518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.205634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.205662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.205784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.205831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.205917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.811 [2024-07-15 10:41:08.205945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.811 qpair failed and we were unable to recover it. 00:24:19.811 [2024-07-15 10:41:08.206050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.206063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.812 [2024-07-15 10:41:08.206090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.206184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.206211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.206306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.206333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 
00:24:19.812 [2024-07-15 10:41:08.206417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.206445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.206535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.206562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.206692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.206732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.206835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.206863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.206964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.206992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.207091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.207118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.207229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.207260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.207375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.207402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.207480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.207510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.207640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.207681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 
00:24:19.812 [2024-07-15 10:41:08.207782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.207815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.207894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.207920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.208000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.208026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.208107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.208133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.208216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.208241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.208326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.208352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.208439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.208465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.208577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.208605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.208695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.208721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.208811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.208841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 
00:24:19.812 [2024-07-15 10:41:08.208934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.208961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.209043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.209070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.209154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.209181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.209270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.209297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.209378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.209404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.209488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.209515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.209605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.209633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.209724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.209750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.812 qpair failed and we were unable to recover it. 00:24:19.812 [2024-07-15 10:41:08.209839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.812 [2024-07-15 10:41:08.209866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.209948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.209973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 
00:24:19.813 [2024-07-15 10:41:08.210060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.210085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.210167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.210192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.210281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.210310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.210428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.210458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.210550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.210590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.210680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.210708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.210787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.210821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.210900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.210926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.211038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.211065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.211154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.211180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 
00:24:19.813 [2024-07-15 10:41:08.211265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.211294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.211409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.211439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.211520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.211547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.211627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.211654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.211764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.211791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.211901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.211941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.212057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.212085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.212177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.212204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.212310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.212337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.212449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.212476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 
00:24:19.813 [2024-07-15 10:41:08.212587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.212613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.212692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.212718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.212811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.212838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.212915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.212941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.213057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.213083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.213160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.213186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.213294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.213319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.213410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.213438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.213527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.213555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 00:24:19.813 [2024-07-15 10:41:08.213652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.213692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.813 qpair failed and we were unable to recover it. 
00:24:19.813 [2024-07-15 10:41:08.213791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.813 [2024-07-15 10:41:08.213826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.213917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.213944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.214034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.214062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.214149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.214175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.814 [2024-07-15 10:41:08.214263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.214290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.214381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:19.814 [2024-07-15 10:41:08.214408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.214507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.814 [2024-07-15 10:41:08.214535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.214626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.814 [2024-07-15 10:41:08.214652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 
00:24:19.814 [2024-07-15 10:41:08.214761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.214786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.214881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.214909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.214998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.215024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.215104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.215130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.215213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.215239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.215331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.215356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.215442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.215475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.215559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.215587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.215676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.215702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.215785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.215829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 
00:24:19.814 [2024-07-15 10:41:08.215919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.215946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.216032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.216059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.216141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.216166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.216251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.216277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.216386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.216412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.216525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.216552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.216639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.216669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.216770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.216818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.216916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.216943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.814 qpair failed and we were unable to recover it. 00:24:19.814 [2024-07-15 10:41:08.217061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.814 [2024-07-15 10:41:08.217087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 
00:24:19.815 [2024-07-15 10:41:08.217171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.217197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.217274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.217300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.217379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.217406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.217493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.217521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.217599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.217626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.217707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.217733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.217814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.217842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.217924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.217951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.218032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.218058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.218172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.218198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 
00:24:19.815 [2024-07-15 10:41:08.218283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.218315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.218408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.218435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.218535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.218575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.218677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.218705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.218791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.218825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.218908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.218934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.219022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.219048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.219135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.219162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.219271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.219297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.219384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.219410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 
00:24:19.815 [2024-07-15 10:41:08.219488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.219516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.219599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.219625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.219711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.219738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.219819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.815 [2024-07-15 10:41:08.219845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.815 qpair failed and we were unable to recover it. 00:24:19.815 [2024-07-15 10:41:08.219948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.219976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.220070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.220096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.220176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.220203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.220283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.220310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.220417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.220443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.220525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.220552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 
00:24:19.816 [2024-07-15 10:41:08.220638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.220665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.220764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.220810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.220907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.220934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.221018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.221045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.221126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.221152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.221228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.221254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.221367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.221393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.221501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.221531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.221623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.221653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.221734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.221762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 
00:24:19.816 [2024-07-15 10:41:08.221855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.221883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.221977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.222004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.222084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.222109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.222194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.222221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.816 [2024-07-15 10:41:08.222296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.222323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.222412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:19.816 [2024-07-15 10:41:08.222437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.222525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.222550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.816 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.222659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.816 [2024-07-15 10:41:08.222685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 
00:24:19.816 [2024-07-15 10:41:08.222762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.222788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.222888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.222914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.223000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.223031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.223113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.223139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.816 [2024-07-15 10:41:08.223224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.816 [2024-07-15 10:41:08.223249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.816 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.223331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.223356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.223438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.223463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.223549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.223577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.223671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.223700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.223797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.223841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 
00:24:19.817 [2024-07-15 10:41:08.223924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.223952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.224030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.224056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.224141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.224168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.224280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.224307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.224426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.224458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.224570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.224599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.224677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.224703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.224822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.224850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.224931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.224957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.225040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.225066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 
00:24:19.817 [2024-07-15 10:41:08.225147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.225173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.225256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.225283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.225372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.225401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.225515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.225543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.225652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.225678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.225760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.225785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.225870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.225895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.225971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.225997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.226084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.226110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.226190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.226218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 
00:24:19.817 [2024-07-15 10:41:08.226319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.226346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.226438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.226468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.226559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.226587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.226692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.226732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.226832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.226877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.817 qpair failed and we were unable to recover it. 00:24:19.817 [2024-07-15 10:41:08.226967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.817 [2024-07-15 10:41:08.226995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.227086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.227112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.227198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.227224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.227306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.227331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.227415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.227441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 
00:24:19.818 [2024-07-15 10:41:08.227517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.227543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.227625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.227654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.227767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.227793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.227890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.227916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.227996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.228023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.228103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.228128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.228236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.228261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.228375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.228400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.228480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.228506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.228586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.228612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 
00:24:19.818 [2024-07-15 10:41:08.228694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.228722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.228811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.228841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.228933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.228964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.229064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.229091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.229175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.229202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.229293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.229320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.229442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.229469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.229578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.229603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.229681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.229706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.229788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.229819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 
00:24:19.818 [2024-07-15 10:41:08.229907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.229933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.230020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.230046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.230130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.230157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.230275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.818 [2024-07-15 10:41:08.230302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 [2024-07-15 10:41:08.230385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.230411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.818 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.818 [2024-07-15 10:41:08.230484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.818 [2024-07-15 10:41:08.230510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.818 qpair failed and we were unable to recover it. 00:24:19.819 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.819 [2024-07-15 10:41:08.230591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.230617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.819 [2024-07-15 10:41:08.230710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.230736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 
00:24:19.819 [2024-07-15 10:41:08.230848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.230874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.230974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.230999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.231078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.231103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.231181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.231206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.231292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.231321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.231419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.231447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.231537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.231571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.231661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.231687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.231812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.231839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.231919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.231944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 
00:24:19.819 [2024-07-15 10:41:08.232020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.232046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.232156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.232183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.232268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.232296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.232387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.232416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.232508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.232537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.232653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.232680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.232791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.232829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.232915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.232941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.233025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.233052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff70c000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.233174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.233202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff71c000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 
00:24:19.819 [2024-07-15 10:41:08.233288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.233316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.233433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.233460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.233541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.233568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.233651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.233678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff714000b90 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.233789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.233822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.233908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.233937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.234048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.819 [2024-07-15 10:41:08.234074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.819 qpair failed and we were unable to recover it. 00:24:19.819 [2024-07-15 10:41:08.234160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.820 [2024-07-15 10:41:08.234186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1098200 with addr=10.0.0.2, port=4420 00:24:19.820 qpair failed and we were unable to recover it. 
00:24:19.820 [2024-07-15 10:41:08.234294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.820 [2024-07-15 10:41:08.236706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:19.820 [2024-07-15 10:41:08.236835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:19.820 [2024-07-15 10:41:08.236864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:19.820 [2024-07-15 10:41:08.236880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:19.820 [2024-07-15 10:41:08.236893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:19.820 [2024-07-15 10:41:08.236924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:19.820 qpair failed and we were unable to recover it. 00:24:19.820 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.820 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:19.820 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.820 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.820 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.820 10:41:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1304333 00:24:19.820 [2024-07-15 10:41:08.246617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:19.820 [2024-07-15 10:41:08.246705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:19.820 [2024-07-15 10:41:08.246731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:19.820 [2024-07-15 10:41:08.246745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:19.820 [2024-07-15 10:41:08.246758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:19.820 [2024-07-15 10:41:08.246786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:19.820 qpair failed and we were unable to recover it. 
00:24:19.820 [2024-07-15 10:41:08.256638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:19.820 [2024-07-15 10:41:08.256729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:19.820 [2024-07-15 10:41:08.256766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:19.820 [2024-07-15 10:41:08.256783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:19.820 [2024-07-15 10:41:08.256818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:19.820 [2024-07-15 10:41:08.256852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:19.820 qpair failed and we were unable to recover it. 00:24:20.079 [2024-07-15 10:41:08.266644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.079 [2024-07-15 10:41:08.266744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.079 [2024-07-15 10:41:08.266773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.079 [2024-07-15 10:41:08.266789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.079 [2024-07-15 10:41:08.266819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.079 [2024-07-15 10:41:08.266850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.079 qpair failed and we were unable to recover it. 00:24:20.079 [2024-07-15 10:41:08.276642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.079 [2024-07-15 10:41:08.276730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.079 [2024-07-15 10:41:08.276755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.079 [2024-07-15 10:41:08.276769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.079 [2024-07-15 10:41:08.276793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.079 [2024-07-15 10:41:08.276830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.079 qpair failed and we were unable to recover it. 
00:24:20.079 [2024-07-15 10:41:08.286733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.079 [2024-07-15 10:41:08.286841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.079 [2024-07-15 10:41:08.286867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.079 [2024-07-15 10:41:08.286883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.079 [2024-07-15 10:41:08.286902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.079 [2024-07-15 10:41:08.286930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.079 qpair failed and we were unable to recover it. 00:24:20.079 [2024-07-15 10:41:08.296681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.079 [2024-07-15 10:41:08.296768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.079 [2024-07-15 10:41:08.296793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.079 [2024-07-15 10:41:08.296816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.079 [2024-07-15 10:41:08.296838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.079 [2024-07-15 10:41:08.296867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.079 qpair failed and we were unable to recover it. 00:24:20.079 [2024-07-15 10:41:08.306669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.079 [2024-07-15 10:41:08.306766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.079 [2024-07-15 10:41:08.306791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.079 [2024-07-15 10:41:08.306813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.079 [2024-07-15 10:41:08.306828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.079 [2024-07-15 10:41:08.306861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.079 qpair failed and we were unable to recover it. 
00:24:20.079 [2024-07-15 10:41:08.316720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.079 [2024-07-15 10:41:08.316817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.079 [2024-07-15 10:41:08.316843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.079 [2024-07-15 10:41:08.316858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.079 [2024-07-15 10:41:08.316871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.079 [2024-07-15 10:41:08.316906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.079 qpair failed and we were unable to recover it. 00:24:20.079 [2024-07-15 10:41:08.326725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.079 [2024-07-15 10:41:08.326819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.079 [2024-07-15 10:41:08.326845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.079 [2024-07-15 10:41:08.326860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.079 [2024-07-15 10:41:08.326873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.079 [2024-07-15 10:41:08.326902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.079 qpair failed and we were unable to recover it. 00:24:20.079 [2024-07-15 10:41:08.336736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.079 [2024-07-15 10:41:08.336830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.079 [2024-07-15 10:41:08.336856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.079 [2024-07-15 10:41:08.336871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.079 [2024-07-15 10:41:08.336883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.079 [2024-07-15 10:41:08.336911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.079 qpair failed and we were unable to recover it. 
00:24:20.079 [2024-07-15 10:41:08.346836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.079 [2024-07-15 10:41:08.346938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.079 [2024-07-15 10:41:08.346964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.079 [2024-07-15 10:41:08.346979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.079 [2024-07-15 10:41:08.347005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.079 [2024-07-15 10:41:08.347035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.079 qpair failed and we were unable to recover it. 00:24:20.079 [2024-07-15 10:41:08.356854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.079 [2024-07-15 10:41:08.356946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.079 [2024-07-15 10:41:08.356972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.079 [2024-07-15 10:41:08.356988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.079 [2024-07-15 10:41:08.357001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.079 [2024-07-15 10:41:08.357030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.079 qpair failed and we were unable to recover it. 00:24:20.079 [2024-07-15 10:41:08.366876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.079 [2024-07-15 10:41:08.366961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.079 [2024-07-15 10:41:08.366987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.079 [2024-07-15 10:41:08.367002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.367015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.080 [2024-07-15 10:41:08.367043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.080 qpair failed and we were unable to recover it. 
00:24:20.080 [2024-07-15 10:41:08.376886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.080 [2024-07-15 10:41:08.376971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.080 [2024-07-15 10:41:08.376997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.080 [2024-07-15 10:41:08.377013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.377026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.080 [2024-07-15 10:41:08.377054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.080 qpair failed and we were unable to recover it. 00:24:20.080 [2024-07-15 10:41:08.386944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.080 [2024-07-15 10:41:08.387043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.080 [2024-07-15 10:41:08.387067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.080 [2024-07-15 10:41:08.387082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.387094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.080 [2024-07-15 10:41:08.387123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.080 qpair failed and we were unable to recover it. 00:24:20.080 [2024-07-15 10:41:08.397049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.080 [2024-07-15 10:41:08.397187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.080 [2024-07-15 10:41:08.397212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.080 [2024-07-15 10:41:08.397227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.397240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.080 [2024-07-15 10:41:08.397269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.080 qpair failed and we were unable to recover it. 
00:24:20.080 [2024-07-15 10:41:08.406984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.080 [2024-07-15 10:41:08.407073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.080 [2024-07-15 10:41:08.407099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.080 [2024-07-15 10:41:08.407114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.407127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.080 [2024-07-15 10:41:08.407155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.080 qpair failed and we were unable to recover it. 00:24:20.080 [2024-07-15 10:41:08.417026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.080 [2024-07-15 10:41:08.417113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.080 [2024-07-15 10:41:08.417141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.080 [2024-07-15 10:41:08.417158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.417171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.080 [2024-07-15 10:41:08.417200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.080 qpair failed and we were unable to recover it. 00:24:20.080 [2024-07-15 10:41:08.427002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.080 [2024-07-15 10:41:08.427095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.080 [2024-07-15 10:41:08.427120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.080 [2024-07-15 10:41:08.427136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.427149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.080 [2024-07-15 10:41:08.427178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.080 qpair failed and we were unable to recover it. 
00:24:20.080 [2024-07-15 10:41:08.437092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.080 [2024-07-15 10:41:08.437189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.080 [2024-07-15 10:41:08.437214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.080 [2024-07-15 10:41:08.437235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.437249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.080 [2024-07-15 10:41:08.437278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.080 qpair failed and we were unable to recover it. 00:24:20.080 [2024-07-15 10:41:08.447080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.080 [2024-07-15 10:41:08.447166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.080 [2024-07-15 10:41:08.447191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.080 [2024-07-15 10:41:08.447206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.447219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.080 [2024-07-15 10:41:08.447247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.080 qpair failed and we were unable to recover it. 00:24:20.080 [2024-07-15 10:41:08.457133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.080 [2024-07-15 10:41:08.457211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.080 [2024-07-15 10:41:08.457236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.080 [2024-07-15 10:41:08.457251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.457264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.080 [2024-07-15 10:41:08.457292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.080 qpair failed and we were unable to recover it. 
00:24:20.080 [2024-07-15 10:41:08.467126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.080 [2024-07-15 10:41:08.467219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.080 [2024-07-15 10:41:08.467244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.080 [2024-07-15 10:41:08.467259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.467272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.080 [2024-07-15 10:41:08.467300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.080 qpair failed and we were unable to recover it. 00:24:20.080 [2024-07-15 10:41:08.477199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.080 [2024-07-15 10:41:08.477320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.080 [2024-07-15 10:41:08.477344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.080 [2024-07-15 10:41:08.477359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.477372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.080 [2024-07-15 10:41:08.477400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.080 qpair failed and we were unable to recover it. 00:24:20.080 [2024-07-15 10:41:08.487193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.080 [2024-07-15 10:41:08.487276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.080 [2024-07-15 10:41:08.487301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.080 [2024-07-15 10:41:08.487316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.487329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.080 [2024-07-15 10:41:08.487358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.080 qpair failed and we were unable to recover it. 
00:24:20.080 [2024-07-15 10:41:08.497305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.080 [2024-07-15 10:41:08.497390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.080 [2024-07-15 10:41:08.497414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.080 [2024-07-15 10:41:08.497432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.497444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.080 [2024-07-15 10:41:08.497472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.080 qpair failed and we were unable to recover it. 00:24:20.080 [2024-07-15 10:41:08.507266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.080 [2024-07-15 10:41:08.507359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.080 [2024-07-15 10:41:08.507384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.080 [2024-07-15 10:41:08.507399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.080 [2024-07-15 10:41:08.507412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.081 [2024-07-15 10:41:08.507439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.081 qpair failed and we were unable to recover it. 00:24:20.081 [2024-07-15 10:41:08.517341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.081 [2024-07-15 10:41:08.517431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.081 [2024-07-15 10:41:08.517455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.081 [2024-07-15 10:41:08.517470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.081 [2024-07-15 10:41:08.517483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.081 [2024-07-15 10:41:08.517511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.081 qpair failed and we were unable to recover it. 
00:24:20.081 [2024-07-15 10:41:08.527343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.081 [2024-07-15 10:41:08.527426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.081 [2024-07-15 10:41:08.527451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.081 [2024-07-15 10:41:08.527471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.081 [2024-07-15 10:41:08.527485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.081 [2024-07-15 10:41:08.527513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.081 qpair failed and we were unable to recover it. 00:24:20.081 [2024-07-15 10:41:08.537348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.081 [2024-07-15 10:41:08.537429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.081 [2024-07-15 10:41:08.537454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.081 [2024-07-15 10:41:08.537469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.081 [2024-07-15 10:41:08.537481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.081 [2024-07-15 10:41:08.537510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.081 qpair failed and we were unable to recover it. 00:24:20.081 [2024-07-15 10:41:08.547486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.081 [2024-07-15 10:41:08.547578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.081 [2024-07-15 10:41:08.547603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.081 [2024-07-15 10:41:08.547618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.081 [2024-07-15 10:41:08.547630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.081 [2024-07-15 10:41:08.547658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.081 qpair failed and we were unable to recover it. 
00:24:20.081 [2024-07-15 10:41:08.557396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.081 [2024-07-15 10:41:08.557489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.081 [2024-07-15 10:41:08.557513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.081 [2024-07-15 10:41:08.557528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.081 [2024-07-15 10:41:08.557541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.081 [2024-07-15 10:41:08.557569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.081 qpair failed and we were unable to recover it. 00:24:20.081 [2024-07-15 10:41:08.567432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.081 [2024-07-15 10:41:08.567514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.081 [2024-07-15 10:41:08.567538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.081 [2024-07-15 10:41:08.567552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.081 [2024-07-15 10:41:08.567564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.081 [2024-07-15 10:41:08.567592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.081 qpair failed and we were unable to recover it. 00:24:20.081 [2024-07-15 10:41:08.577472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.081 [2024-07-15 10:41:08.577561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.081 [2024-07-15 10:41:08.577587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.081 [2024-07-15 10:41:08.577604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.081 [2024-07-15 10:41:08.577616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.081 [2024-07-15 10:41:08.577645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.081 qpair failed and we were unable to recover it. 
00:24:20.081 [2024-07-15 10:41:08.587508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.081 [2024-07-15 10:41:08.587598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.081 [2024-07-15 10:41:08.587624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.081 [2024-07-15 10:41:08.587640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.081 [2024-07-15 10:41:08.587652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.081 [2024-07-15 10:41:08.587680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.081 qpair failed and we were unable to recover it. 00:24:20.081 [2024-07-15 10:41:08.597513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.081 [2024-07-15 10:41:08.597619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.081 [2024-07-15 10:41:08.597645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.081 [2024-07-15 10:41:08.597660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.081 [2024-07-15 10:41:08.597672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.081 [2024-07-15 10:41:08.597699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.081 qpair failed and we were unable to recover it. 00:24:20.081 [2024-07-15 10:41:08.607564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.081 [2024-07-15 10:41:08.607651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.081 [2024-07-15 10:41:08.607675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.081 [2024-07-15 10:41:08.607689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.081 [2024-07-15 10:41:08.607701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.081 [2024-07-15 10:41:08.607729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.081 qpair failed and we were unable to recover it. 
00:24:20.081 [2024-07-15 10:41:08.617577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.081 [2024-07-15 10:41:08.617681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.081 [2024-07-15 10:41:08.617707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.081 [2024-07-15 10:41:08.617727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.081 [2024-07-15 10:41:08.617740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.081 [2024-07-15 10:41:08.617767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.081 qpair failed and we were unable to recover it. 00:24:20.340 [2024-07-15 10:41:08.627613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.340 [2024-07-15 10:41:08.627710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.340 [2024-07-15 10:41:08.627746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.340 [2024-07-15 10:41:08.627773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.340 [2024-07-15 10:41:08.627793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.340 [2024-07-15 10:41:08.627840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.340 qpair failed and we were unable to recover it. 00:24:20.340 [2024-07-15 10:41:08.637614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.340 [2024-07-15 10:41:08.637716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.340 [2024-07-15 10:41:08.637743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.340 [2024-07-15 10:41:08.637759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.340 [2024-07-15 10:41:08.637771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.340 [2024-07-15 10:41:08.637807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.340 qpair failed and we were unable to recover it. 
00:24:20.340 [2024-07-15 10:41:08.647665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.340 [2024-07-15 10:41:08.647779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.340 [2024-07-15 10:41:08.647814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.340 [2024-07-15 10:41:08.647842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.340 [2024-07-15 10:41:08.647855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.340 [2024-07-15 10:41:08.647883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.340 qpair failed and we were unable to recover it. 00:24:20.340 [2024-07-15 10:41:08.657714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.340 [2024-07-15 10:41:08.657797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.340 [2024-07-15 10:41:08.657826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.340 [2024-07-15 10:41:08.657841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.340 [2024-07-15 10:41:08.657854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.340 [2024-07-15 10:41:08.657882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.340 qpair failed and we were unable to recover it. 00:24:20.340 [2024-07-15 10:41:08.667773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.340 [2024-07-15 10:41:08.667874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.340 [2024-07-15 10:41:08.667900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.340 [2024-07-15 10:41:08.667915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.340 [2024-07-15 10:41:08.667927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.340 [2024-07-15 10:41:08.667955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.340 qpair failed and we were unable to recover it. 
00:24:20.340 [2024-07-15 10:41:08.677815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.340 [2024-07-15 10:41:08.677911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.340 [2024-07-15 10:41:08.677936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.340 [2024-07-15 10:41:08.677951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.340 [2024-07-15 10:41:08.677963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.340 [2024-07-15 10:41:08.677991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.340 qpair failed and we were unable to recover it. 00:24:20.340 [2024-07-15 10:41:08.687786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.340 [2024-07-15 10:41:08.687886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.340 [2024-07-15 10:41:08.687911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.340 [2024-07-15 10:41:08.687926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.340 [2024-07-15 10:41:08.687938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.340 [2024-07-15 10:41:08.687965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.340 qpair failed and we were unable to recover it. 00:24:20.340 [2024-07-15 10:41:08.697797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.340 [2024-07-15 10:41:08.697892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.340 [2024-07-15 10:41:08.697916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.340 [2024-07-15 10:41:08.697929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.340 [2024-07-15 10:41:08.697941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.340 [2024-07-15 10:41:08.697969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.340 qpair failed and we were unable to recover it. 
00:24:20.340 [2024-07-15 10:41:08.707866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.340 [2024-07-15 10:41:08.707971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.340 [2024-07-15 10:41:08.708003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.340 [2024-07-15 10:41:08.708019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.340 [2024-07-15 10:41:08.708031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.340 [2024-07-15 10:41:08.708058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.340 qpair failed and we were unable to recover it. 00:24:20.340 [2024-07-15 10:41:08.717953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.340 [2024-07-15 10:41:08.718045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.340 [2024-07-15 10:41:08.718074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.340 [2024-07-15 10:41:08.718091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.340 [2024-07-15 10:41:08.718104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.341 [2024-07-15 10:41:08.718132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.341 qpair failed and we were unable to recover it. 00:24:20.341 [2024-07-15 10:41:08.727896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.341 [2024-07-15 10:41:08.727980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.341 [2024-07-15 10:41:08.728005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.341 [2024-07-15 10:41:08.728020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.341 [2024-07-15 10:41:08.728032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.341 [2024-07-15 10:41:08.728059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.341 qpair failed and we were unable to recover it. 
00:24:20.341 [2024-07-15 10:41:08.737950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.341 [2024-07-15 10:41:08.738084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.341 [2024-07-15 10:41:08.738109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.341 [2024-07-15 10:41:08.738124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.341 [2024-07-15 10:41:08.738136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.341 [2024-07-15 10:41:08.738163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.341 qpair failed and we were unable to recover it. 00:24:20.341 [2024-07-15 10:41:08.747966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.341 [2024-07-15 10:41:08.748069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.341 [2024-07-15 10:41:08.748095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.341 [2024-07-15 10:41:08.748110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.341 [2024-07-15 10:41:08.748122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.341 [2024-07-15 10:41:08.748154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.341 qpair failed and we were unable to recover it. 00:24:20.341 [2024-07-15 10:41:08.757983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.341 [2024-07-15 10:41:08.758109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.341 [2024-07-15 10:41:08.758135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.341 [2024-07-15 10:41:08.758150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.341 [2024-07-15 10:41:08.758162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.341 [2024-07-15 10:41:08.758189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.341 qpair failed and we were unable to recover it. 
00:24:20.341 [2024-07-15 10:41:08.768048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.341 [2024-07-15 10:41:08.768172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.341 [2024-07-15 10:41:08.768197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.341 [2024-07-15 10:41:08.768211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.341 [2024-07-15 10:41:08.768223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.341 [2024-07-15 10:41:08.768250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.341 qpair failed and we were unable to recover it. 00:24:20.341 [2024-07-15 10:41:08.778055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.341 [2024-07-15 10:41:08.778137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.341 [2024-07-15 10:41:08.778161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.341 [2024-07-15 10:41:08.778175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.341 [2024-07-15 10:41:08.778187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.341 [2024-07-15 10:41:08.778214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.341 qpair failed and we were unable to recover it. 00:24:20.341 [2024-07-15 10:41:08.788113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.341 [2024-07-15 10:41:08.788235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.341 [2024-07-15 10:41:08.788261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.341 [2024-07-15 10:41:08.788276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.341 [2024-07-15 10:41:08.788288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.341 [2024-07-15 10:41:08.788316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.341 qpair failed and we were unable to recover it. 
00:24:20.341 [2024-07-15 10:41:08.798136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.341 [2024-07-15 10:41:08.798241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.341 [2024-07-15 10:41:08.798270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.341 [2024-07-15 10:41:08.798285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.341 [2024-07-15 10:41:08.798298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.341 [2024-07-15 10:41:08.798325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.341 qpair failed and we were unable to recover it. 00:24:20.341 [2024-07-15 10:41:08.808122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.341 [2024-07-15 10:41:08.808204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.341 [2024-07-15 10:41:08.808228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.341 [2024-07-15 10:41:08.808243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.341 [2024-07-15 10:41:08.808255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.341 [2024-07-15 10:41:08.808282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.341 qpair failed and we were unable to recover it. 00:24:20.341 [2024-07-15 10:41:08.818142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.341 [2024-07-15 10:41:08.818225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.341 [2024-07-15 10:41:08.818249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.341 [2024-07-15 10:41:08.818263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.341 [2024-07-15 10:41:08.818275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.341 [2024-07-15 10:41:08.818303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.341 qpair failed and we were unable to recover it. 
00:24:20.341 [2024-07-15 10:41:08.828221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.341 [2024-07-15 10:41:08.828316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.341 [2024-07-15 10:41:08.828344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.341 [2024-07-15 10:41:08.828361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.341 [2024-07-15 10:41:08.828373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.341 [2024-07-15 10:41:08.828403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.341 qpair failed and we were unable to recover it. 00:24:20.341 [2024-07-15 10:41:08.838224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.341 [2024-07-15 10:41:08.838314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.341 [2024-07-15 10:41:08.838339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.341 [2024-07-15 10:41:08.838353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.341 [2024-07-15 10:41:08.838365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.341 [2024-07-15 10:41:08.838398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.341 qpair failed and we were unable to recover it. 00:24:20.341 [2024-07-15 10:41:08.848227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.341 [2024-07-15 10:41:08.848324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.341 [2024-07-15 10:41:08.848349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.341 [2024-07-15 10:41:08.848363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.341 [2024-07-15 10:41:08.848375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.341 [2024-07-15 10:41:08.848403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.341 qpair failed and we were unable to recover it. 
00:24:20.341 [2024-07-15 10:41:08.858266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.341 [2024-07-15 10:41:08.858355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.341 [2024-07-15 10:41:08.858380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.341 [2024-07-15 10:41:08.858394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.342 [2024-07-15 10:41:08.858407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.342 [2024-07-15 10:41:08.858434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.342 qpair failed and we were unable to recover it. 00:24:20.342 [2024-07-15 10:41:08.868303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.342 [2024-07-15 10:41:08.868390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.342 [2024-07-15 10:41:08.868415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.342 [2024-07-15 10:41:08.868430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.342 [2024-07-15 10:41:08.868442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.342 [2024-07-15 10:41:08.868470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.342 qpair failed and we were unable to recover it. 00:24:20.342 [2024-07-15 10:41:08.878309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.342 [2024-07-15 10:41:08.878408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.342 [2024-07-15 10:41:08.878433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.342 [2024-07-15 10:41:08.878447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.342 [2024-07-15 10:41:08.878460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.342 [2024-07-15 10:41:08.878487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.342 qpair failed and we were unable to recover it. 
00:24:20.342 [2024-07-15 10:41:08.888413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.342 [2024-07-15 10:41:08.888511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.342 [2024-07-15 10:41:08.888543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.342 [2024-07-15 10:41:08.888560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.342 [2024-07-15 10:41:08.888572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.342 [2024-07-15 10:41:08.888601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.342 qpair failed and we were unable to recover it. 00:24:20.599 [2024-07-15 10:41:08.898388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.599 [2024-07-15 10:41:08.898470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.599 [2024-07-15 10:41:08.898501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.599 [2024-07-15 10:41:08.898516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.599 [2024-07-15 10:41:08.898529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.599 [2024-07-15 10:41:08.898558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.599 qpair failed and we were unable to recover it. 00:24:20.599 [2024-07-15 10:41:08.908429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.599 [2024-07-15 10:41:08.908520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.599 [2024-07-15 10:41:08.908545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.599 [2024-07-15 10:41:08.908560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.599 [2024-07-15 10:41:08.908573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.599 [2024-07-15 10:41:08.908600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.599 qpair failed and we were unable to recover it. 
00:24:20.599 [2024-07-15 10:41:08.918469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:08.918560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:08.918584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:08.918599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:08.918611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:08.918639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 00:24:20.600 [2024-07-15 10:41:08.928519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:08.928604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:08.928628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:08.928642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:08.928654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:08.928687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 00:24:20.600 [2024-07-15 10:41:08.938522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:08.938610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:08.938634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:08.938648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:08.938661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:08.938688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 
00:24:20.600 [2024-07-15 10:41:08.948556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:08.948642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:08.948666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:08.948680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:08.948692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:08.948720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 00:24:20.600 [2024-07-15 10:41:08.958589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:08.958675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:08.958699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:08.958713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:08.958726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:08.958753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 00:24:20.600 [2024-07-15 10:41:08.968592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:08.968677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:08.968702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:08.968716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:08.968728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:08.968755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 
00:24:20.600 [2024-07-15 10:41:08.978652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:08.978731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:08.978773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:08.978789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:08.978807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:08.978837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 00:24:20.600 [2024-07-15 10:41:08.988637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:08.988736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:08.988761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:08.988775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:08.988788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:08.988823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 00:24:20.600 [2024-07-15 10:41:08.998673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:08.998756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:08.998781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:08.998796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:08.998816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:08.998846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 
00:24:20.600 [2024-07-15 10:41:09.008704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:09.008784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:09.008816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:09.008831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:09.008843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:09.008870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 00:24:20.600 [2024-07-15 10:41:09.018738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:09.018828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:09.018852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:09.018866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:09.018883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:09.018912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 00:24:20.600 [2024-07-15 10:41:09.028762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:09.028859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:09.028885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:09.028899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:09.028912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:09.028939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 
00:24:20.600 [2024-07-15 10:41:09.038947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:09.039046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:09.039074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:09.039088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:09.039100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:09.039127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 00:24:20.600 [2024-07-15 10:41:09.048892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:09.048987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:09.049014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:09.049029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:09.049042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:09.049071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 00:24:20.600 [2024-07-15 10:41:09.058886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:09.058984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.600 [2024-07-15 10:41:09.059009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.600 [2024-07-15 10:41:09.059023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.600 [2024-07-15 10:41:09.059035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.600 [2024-07-15 10:41:09.059062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.600 qpair failed and we were unable to recover it. 
00:24:20.600 [2024-07-15 10:41:09.068934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.600 [2024-07-15 10:41:09.069032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.601 [2024-07-15 10:41:09.069058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.601 [2024-07-15 10:41:09.069073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.601 [2024-07-15 10:41:09.069085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.601 [2024-07-15 10:41:09.069112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.601 qpair failed and we were unable to recover it. 00:24:20.601 [2024-07-15 10:41:09.079003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.601 [2024-07-15 10:41:09.079101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.601 [2024-07-15 10:41:09.079126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.601 [2024-07-15 10:41:09.079141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.601 [2024-07-15 10:41:09.079153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.601 [2024-07-15 10:41:09.079181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.601 qpair failed and we were unable to recover it. 00:24:20.601 [2024-07-15 10:41:09.088950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.601 [2024-07-15 10:41:09.089032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.601 [2024-07-15 10:41:09.089057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.601 [2024-07-15 10:41:09.089072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.601 [2024-07-15 10:41:09.089084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.601 [2024-07-15 10:41:09.089112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.601 qpair failed and we were unable to recover it. 
00:24:20.601 [2024-07-15 10:41:09.098954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.601 [2024-07-15 10:41:09.099039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.601 [2024-07-15 10:41:09.099063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.601 [2024-07-15 10:41:09.099078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.601 [2024-07-15 10:41:09.099090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.601 [2024-07-15 10:41:09.099117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.601 qpair failed and we were unable to recover it. 00:24:20.601 [2024-07-15 10:41:09.109041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.601 [2024-07-15 10:41:09.109131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.601 [2024-07-15 10:41:09.109154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.601 [2024-07-15 10:41:09.109168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.601 [2024-07-15 10:41:09.109185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.601 [2024-07-15 10:41:09.109213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.601 qpair failed and we were unable to recover it. 00:24:20.601 [2024-07-15 10:41:09.119020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.601 [2024-07-15 10:41:09.119116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.601 [2024-07-15 10:41:09.119141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.601 [2024-07-15 10:41:09.119155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.601 [2024-07-15 10:41:09.119168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.601 [2024-07-15 10:41:09.119195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.601 qpair failed and we were unable to recover it. 
00:24:20.601 [2024-07-15 10:41:09.129061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.601 [2024-07-15 10:41:09.129196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.601 [2024-07-15 10:41:09.129224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.601 [2024-07-15 10:41:09.129240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.601 [2024-07-15 10:41:09.129252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.601 [2024-07-15 10:41:09.129279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.601 qpair failed and we were unable to recover it. 00:24:20.601 [2024-07-15 10:41:09.139098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.601 [2024-07-15 10:41:09.139188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.601 [2024-07-15 10:41:09.139214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.601 [2024-07-15 10:41:09.139229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.601 [2024-07-15 10:41:09.139241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.601 [2024-07-15 10:41:09.139269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.601 qpair failed and we were unable to recover it. 00:24:20.859 [2024-07-15 10:41:09.149230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.859 [2024-07-15 10:41:09.149339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.859 [2024-07-15 10:41:09.149367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.859 [2024-07-15 10:41:09.149383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.859 [2024-07-15 10:41:09.149395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.859 [2024-07-15 10:41:09.149424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.859 qpair failed and we were unable to recover it. 
00:24:20.859 [2024-07-15 10:41:09.159141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.859 [2024-07-15 10:41:09.159239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.859 [2024-07-15 10:41:09.159266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.859 [2024-07-15 10:41:09.159282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.859 [2024-07-15 10:41:09.159294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.859 [2024-07-15 10:41:09.159322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.859 qpair failed and we were unable to recover it. 00:24:20.859 [2024-07-15 10:41:09.169221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.859 [2024-07-15 10:41:09.169314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.859 [2024-07-15 10:41:09.169344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.859 [2024-07-15 10:41:09.169360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.859 [2024-07-15 10:41:09.169373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.859 [2024-07-15 10:41:09.169402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.859 qpair failed and we were unable to recover it. 00:24:20.859 [2024-07-15 10:41:09.179213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.859 [2024-07-15 10:41:09.179296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.859 [2024-07-15 10:41:09.179320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.859 [2024-07-15 10:41:09.179334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.859 [2024-07-15 10:41:09.179346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.859 [2024-07-15 10:41:09.179375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.859 qpair failed and we were unable to recover it. 
00:24:20.859 [2024-07-15 10:41:09.189265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.859 [2024-07-15 10:41:09.189350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.859 [2024-07-15 10:41:09.189374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.859 [2024-07-15 10:41:09.189388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.859 [2024-07-15 10:41:09.189400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.859 [2024-07-15 10:41:09.189428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.859 qpair failed and we were unable to recover it. 00:24:20.859 [2024-07-15 10:41:09.199284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.859 [2024-07-15 10:41:09.199371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.859 [2024-07-15 10:41:09.199395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.859 [2024-07-15 10:41:09.199415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.859 [2024-07-15 10:41:09.199427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.859 [2024-07-15 10:41:09.199455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 00:24:20.860 [2024-07-15 10:41:09.209286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.860 [2024-07-15 10:41:09.209373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.860 [2024-07-15 10:41:09.209402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.860 [2024-07-15 10:41:09.209417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.860 [2024-07-15 10:41:09.209429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.860 [2024-07-15 10:41:09.209457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 
00:24:20.860 [2024-07-15 10:41:09.219338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.860 [2024-07-15 10:41:09.219429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.860 [2024-07-15 10:41:09.219454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.860 [2024-07-15 10:41:09.219469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.860 [2024-07-15 10:41:09.219481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.860 [2024-07-15 10:41:09.219508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 00:24:20.860 [2024-07-15 10:41:09.229382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.860 [2024-07-15 10:41:09.229497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.860 [2024-07-15 10:41:09.229522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.860 [2024-07-15 10:41:09.229537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.860 [2024-07-15 10:41:09.229549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.860 [2024-07-15 10:41:09.229576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 00:24:20.860 [2024-07-15 10:41:09.239459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.860 [2024-07-15 10:41:09.239568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.860 [2024-07-15 10:41:09.239593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.860 [2024-07-15 10:41:09.239608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.860 [2024-07-15 10:41:09.239620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.860 [2024-07-15 10:41:09.239647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 
00:24:20.860 [2024-07-15 10:41:09.249389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.860 [2024-07-15 10:41:09.249487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.860 [2024-07-15 10:41:09.249512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.860 [2024-07-15 10:41:09.249527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.860 [2024-07-15 10:41:09.249539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.860 [2024-07-15 10:41:09.249567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 00:24:20.860 [2024-07-15 10:41:09.259435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.860 [2024-07-15 10:41:09.259515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.860 [2024-07-15 10:41:09.259539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.860 [2024-07-15 10:41:09.259553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.860 [2024-07-15 10:41:09.259565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.860 [2024-07-15 10:41:09.259592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 00:24:20.860 [2024-07-15 10:41:09.269460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.860 [2024-07-15 10:41:09.269550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.860 [2024-07-15 10:41:09.269578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.860 [2024-07-15 10:41:09.269592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.860 [2024-07-15 10:41:09.269604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.860 [2024-07-15 10:41:09.269631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 
00:24:20.860 [2024-07-15 10:41:09.279490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.860 [2024-07-15 10:41:09.279577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.860 [2024-07-15 10:41:09.279601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.860 [2024-07-15 10:41:09.279615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.860 [2024-07-15 10:41:09.279627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.860 [2024-07-15 10:41:09.279654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 00:24:20.860 [2024-07-15 10:41:09.289506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.860 [2024-07-15 10:41:09.289587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.860 [2024-07-15 10:41:09.289611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.860 [2024-07-15 10:41:09.289630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.860 [2024-07-15 10:41:09.289642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.860 [2024-07-15 10:41:09.289670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 00:24:20.860 [2024-07-15 10:41:09.299563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.860 [2024-07-15 10:41:09.299645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.860 [2024-07-15 10:41:09.299670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.860 [2024-07-15 10:41:09.299684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.860 [2024-07-15 10:41:09.299696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.860 [2024-07-15 10:41:09.299723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 
00:24:20.860 [2024-07-15 10:41:09.309598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.860 [2024-07-15 10:41:09.309702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.860 [2024-07-15 10:41:09.309728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.860 [2024-07-15 10:41:09.309742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.860 [2024-07-15 10:41:09.309754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.860 [2024-07-15 10:41:09.309782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 00:24:20.860 [2024-07-15 10:41:09.319592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.860 [2024-07-15 10:41:09.319682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.860 [2024-07-15 10:41:09.319710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.860 [2024-07-15 10:41:09.319725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.860 [2024-07-15 10:41:09.319737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.860 [2024-07-15 10:41:09.319764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 00:24:20.860 [2024-07-15 10:41:09.329657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.860 [2024-07-15 10:41:09.329781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.860 [2024-07-15 10:41:09.329814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.860 [2024-07-15 10:41:09.329831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.860 [2024-07-15 10:41:09.329843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.860 [2024-07-15 10:41:09.329871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 
00:24:20.860 [2024-07-15 10:41:09.339685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.860 [2024-07-15 10:41:09.339765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.860 [2024-07-15 10:41:09.339789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.860 [2024-07-15 10:41:09.339812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.860 [2024-07-15 10:41:09.339825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.860 [2024-07-15 10:41:09.339853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.860 qpair failed and we were unable to recover it. 00:24:20.860 [2024-07-15 10:41:09.349725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.861 [2024-07-15 10:41:09.349865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.861 [2024-07-15 10:41:09.349891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.861 [2024-07-15 10:41:09.349905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.861 [2024-07-15 10:41:09.349917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.861 [2024-07-15 10:41:09.349945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.861 qpair failed and we were unable to recover it. 00:24:20.861 [2024-07-15 10:41:09.359726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.861 [2024-07-15 10:41:09.359867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.861 [2024-07-15 10:41:09.359896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.861 [2024-07-15 10:41:09.359912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.861 [2024-07-15 10:41:09.359924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.861 [2024-07-15 10:41:09.359953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.861 qpair failed and we were unable to recover it. 
00:24:20.861 [2024-07-15 10:41:09.369760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.861 [2024-07-15 10:41:09.369869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.861 [2024-07-15 10:41:09.369899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.861 [2024-07-15 10:41:09.369915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.861 [2024-07-15 10:41:09.369928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.861 [2024-07-15 10:41:09.369957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.861 qpair failed and we were unable to recover it. 00:24:20.861 [2024-07-15 10:41:09.379782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.861 [2024-07-15 10:41:09.379877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.861 [2024-07-15 10:41:09.379902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.861 [2024-07-15 10:41:09.379922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.861 [2024-07-15 10:41:09.379934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.861 [2024-07-15 10:41:09.379961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.861 qpair failed and we were unable to recover it. 00:24:20.861 [2024-07-15 10:41:09.389824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.861 [2024-07-15 10:41:09.389914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.861 [2024-07-15 10:41:09.389938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.861 [2024-07-15 10:41:09.389952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.861 [2024-07-15 10:41:09.389964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.861 [2024-07-15 10:41:09.389991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.861 qpair failed and we were unable to recover it. 
00:24:20.861 [2024-07-15 10:41:09.399840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.861 [2024-07-15 10:41:09.399928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.861 [2024-07-15 10:41:09.399953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.861 [2024-07-15 10:41:09.399968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.861 [2024-07-15 10:41:09.399980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:20.861 [2024-07-15 10:41:09.400007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.861 qpair failed and we were unable to recover it. 00:24:21.119 [2024-07-15 10:41:09.409911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.119 [2024-07-15 10:41:09.410018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.119 [2024-07-15 10:41:09.410048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.119 [2024-07-15 10:41:09.410066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.119 [2024-07-15 10:41:09.410078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.119 [2024-07-15 10:41:09.410107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.119 qpair failed and we were unable to recover it. 00:24:21.119 [2024-07-15 10:41:09.419892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.119 [2024-07-15 10:41:09.419984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.119 [2024-07-15 10:41:09.420011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.119 [2024-07-15 10:41:09.420026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.119 [2024-07-15 10:41:09.420038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.119 [2024-07-15 10:41:09.420066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.119 qpair failed and we were unable to recover it. 
00:24:21.119 [2024-07-15 10:41:09.429942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.119 [2024-07-15 10:41:09.430032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.119 [2024-07-15 10:41:09.430060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.119 [2024-07-15 10:41:09.430074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.119 [2024-07-15 10:41:09.430087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.119 [2024-07-15 10:41:09.430115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.119 qpair failed and we were unable to recover it. 00:24:21.119 [2024-07-15 10:41:09.439940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.119 [2024-07-15 10:41:09.440035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.119 [2024-07-15 10:41:09.440061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.119 [2024-07-15 10:41:09.440076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.119 [2024-07-15 10:41:09.440088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.119 [2024-07-15 10:41:09.440115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.119 qpair failed and we were unable to recover it. 00:24:21.119 [2024-07-15 10:41:09.450065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.119 [2024-07-15 10:41:09.450153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.119 [2024-07-15 10:41:09.450179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.119 [2024-07-15 10:41:09.450194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.119 [2024-07-15 10:41:09.450206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.119 [2024-07-15 10:41:09.450233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.119 qpair failed and we were unable to recover it. 
00:24:21.119 [2024-07-15 10:41:09.460028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.119 [2024-07-15 10:41:09.460154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.119 [2024-07-15 10:41:09.460179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.119 [2024-07-15 10:41:09.460193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.119 [2024-07-15 10:41:09.460205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.119 [2024-07-15 10:41:09.460232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.119 qpair failed and we were unable to recover it. 00:24:21.119 [2024-07-15 10:41:09.470057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.119 [2024-07-15 10:41:09.470152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.119 [2024-07-15 10:41:09.470184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.119 [2024-07-15 10:41:09.470200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.119 [2024-07-15 10:41:09.470211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.119 [2024-07-15 10:41:09.470239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.119 qpair failed and we were unable to recover it. 00:24:21.119 [2024-07-15 10:41:09.480095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.119 [2024-07-15 10:41:09.480211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.119 [2024-07-15 10:41:09.480236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.119 [2024-07-15 10:41:09.480251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.119 [2024-07-15 10:41:09.480263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.119 [2024-07-15 10:41:09.480290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.119 qpair failed and we were unable to recover it. 
00:24:21.119 [2024-07-15 10:41:09.490128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.119 [2024-07-15 10:41:09.490209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.119 [2024-07-15 10:41:09.490236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.119 [2024-07-15 10:41:09.490251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.119 [2024-07-15 10:41:09.490264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.119 [2024-07-15 10:41:09.490292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.119 qpair failed and we were unable to recover it. 00:24:21.119 [2024-07-15 10:41:09.500126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.119 [2024-07-15 10:41:09.500256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.119 [2024-07-15 10:41:09.500281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.500296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.500308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.120 [2024-07-15 10:41:09.500335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.120 qpair failed and we were unable to recover it. 00:24:21.120 [2024-07-15 10:41:09.510207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.120 [2024-07-15 10:41:09.510346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.120 [2024-07-15 10:41:09.510371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.510385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.510398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.120 [2024-07-15 10:41:09.510431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.120 qpair failed and we were unable to recover it. 
00:24:21.120 [2024-07-15 10:41:09.520171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.120 [2024-07-15 10:41:09.520258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.120 [2024-07-15 10:41:09.520281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.520296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.520308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.120 [2024-07-15 10:41:09.520334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.120 qpair failed and we were unable to recover it. 00:24:21.120 [2024-07-15 10:41:09.530262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.120 [2024-07-15 10:41:09.530367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.120 [2024-07-15 10:41:09.530392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.530406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.530418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.120 [2024-07-15 10:41:09.530445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.120 qpair failed and we were unable to recover it. 00:24:21.120 [2024-07-15 10:41:09.540339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.120 [2024-07-15 10:41:09.540426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.120 [2024-07-15 10:41:09.540451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.540465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.540477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.120 [2024-07-15 10:41:09.540504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.120 qpair failed and we were unable to recover it. 
00:24:21.120 [2024-07-15 10:41:09.550287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.120 [2024-07-15 10:41:09.550378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.120 [2024-07-15 10:41:09.550401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.550415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.550427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.120 [2024-07-15 10:41:09.550455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.120 qpair failed and we were unable to recover it. 00:24:21.120 [2024-07-15 10:41:09.560279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.120 [2024-07-15 10:41:09.560366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.120 [2024-07-15 10:41:09.560400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.560416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.560428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.120 [2024-07-15 10:41:09.560456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.120 qpair failed and we were unable to recover it. 00:24:21.120 [2024-07-15 10:41:09.570307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.120 [2024-07-15 10:41:09.570416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.120 [2024-07-15 10:41:09.570441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.570456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.570468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.120 [2024-07-15 10:41:09.570495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.120 qpair failed and we were unable to recover it. 
00:24:21.120 [2024-07-15 10:41:09.580396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.120 [2024-07-15 10:41:09.580506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.120 [2024-07-15 10:41:09.580531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.580546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.580557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.120 [2024-07-15 10:41:09.580584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.120 qpair failed and we were unable to recover it. 00:24:21.120 [2024-07-15 10:41:09.590383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.120 [2024-07-15 10:41:09.590473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.120 [2024-07-15 10:41:09.590502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.590516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.590528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.120 [2024-07-15 10:41:09.590555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.120 qpair failed and we were unable to recover it. 00:24:21.120 [2024-07-15 10:41:09.600440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.120 [2024-07-15 10:41:09.600560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.120 [2024-07-15 10:41:09.600584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.600599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.600611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.120 [2024-07-15 10:41:09.600643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.120 qpair failed and we were unable to recover it. 
00:24:21.120 [2024-07-15 10:41:09.610425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.120 [2024-07-15 10:41:09.610504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.120 [2024-07-15 10:41:09.610528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.610542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.610554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.120 [2024-07-15 10:41:09.610580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.120 qpair failed and we were unable to recover it. 00:24:21.120 [2024-07-15 10:41:09.620454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.120 [2024-07-15 10:41:09.620540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.120 [2024-07-15 10:41:09.620571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.620587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.620599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.120 [2024-07-15 10:41:09.620626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.120 qpair failed and we were unable to recover it. 00:24:21.120 [2024-07-15 10:41:09.630480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.120 [2024-07-15 10:41:09.630571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.120 [2024-07-15 10:41:09.630599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.630614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.630626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.120 [2024-07-15 10:41:09.630653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.120 qpair failed and we were unable to recover it. 
00:24:21.120 [2024-07-15 10:41:09.640575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.120 [2024-07-15 10:41:09.640681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.120 [2024-07-15 10:41:09.640706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.120 [2024-07-15 10:41:09.640721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.120 [2024-07-15 10:41:09.640734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.121 [2024-07-15 10:41:09.640762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.121 qpair failed and we were unable to recover it. 00:24:21.121 [2024-07-15 10:41:09.650535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.121 [2024-07-15 10:41:09.650619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.121 [2024-07-15 10:41:09.650650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.121 [2024-07-15 10:41:09.650666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.121 [2024-07-15 10:41:09.650679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.121 [2024-07-15 10:41:09.650706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.121 qpair failed and we were unable to recover it. 00:24:21.121 [2024-07-15 10:41:09.660585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.121 [2024-07-15 10:41:09.660674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.121 [2024-07-15 10:41:09.660699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.121 [2024-07-15 10:41:09.660714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.121 [2024-07-15 10:41:09.660726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.121 [2024-07-15 10:41:09.660753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.121 qpair failed and we were unable to recover it. 
00:24:21.379 [2024-07-15 10:41:09.670622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.379 [2024-07-15 10:41:09.670716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.379 [2024-07-15 10:41:09.670746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.379 [2024-07-15 10:41:09.670761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.379 [2024-07-15 10:41:09.670773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.379 [2024-07-15 10:41:09.670809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.379 qpair failed and we were unable to recover it. 00:24:21.379 [2024-07-15 10:41:09.680627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.379 [2024-07-15 10:41:09.680716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.379 [2024-07-15 10:41:09.680742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.379 [2024-07-15 10:41:09.680756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.379 [2024-07-15 10:41:09.680768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.379 [2024-07-15 10:41:09.680796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.379 qpair failed and we were unable to recover it. 00:24:21.379 [2024-07-15 10:41:09.690656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.379 [2024-07-15 10:41:09.690741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.379 [2024-07-15 10:41:09.690767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.379 [2024-07-15 10:41:09.690781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.379 [2024-07-15 10:41:09.690793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.379 [2024-07-15 10:41:09.690833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.379 qpair failed and we were unable to recover it. 
00:24:21.379 [2024-07-15 10:41:09.700675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.379 [2024-07-15 10:41:09.700761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.379 [2024-07-15 10:41:09.700785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.379 [2024-07-15 10:41:09.700799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.379 [2024-07-15 10:41:09.700823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.379 [2024-07-15 10:41:09.700851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.379 qpair failed and we were unable to recover it. 00:24:21.379 [2024-07-15 10:41:09.710746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.379 [2024-07-15 10:41:09.710864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.379 [2024-07-15 10:41:09.710890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.379 [2024-07-15 10:41:09.710904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.379 [2024-07-15 10:41:09.710916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.379 [2024-07-15 10:41:09.710944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.379 qpair failed and we were unable to recover it. 00:24:21.379 [2024-07-15 10:41:09.720784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.379 [2024-07-15 10:41:09.720876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.379 [2024-07-15 10:41:09.720901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.379 [2024-07-15 10:41:09.720916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.379 [2024-07-15 10:41:09.720928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.379 [2024-07-15 10:41:09.720955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.379 qpair failed and we were unable to recover it. 
00:24:21.379 [2024-07-15 10:41:09.730745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.379 [2024-07-15 10:41:09.730850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.730876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.730890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.730902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.380 [2024-07-15 10:41:09.730929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.380 qpair failed and we were unable to recover it. 00:24:21.380 [2024-07-15 10:41:09.740775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.380 [2024-07-15 10:41:09.740903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.740933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.740949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.740961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.380 [2024-07-15 10:41:09.740988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.380 qpair failed and we were unable to recover it. 00:24:21.380 [2024-07-15 10:41:09.750925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.380 [2024-07-15 10:41:09.751015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.751040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.751055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.751067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.380 [2024-07-15 10:41:09.751094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.380 qpair failed and we were unable to recover it. 
00:24:21.380 [2024-07-15 10:41:09.760898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.380 [2024-07-15 10:41:09.761006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.761031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.761045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.761057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.380 [2024-07-15 10:41:09.761084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.380 qpair failed and we were unable to recover it. 00:24:21.380 [2024-07-15 10:41:09.770911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.380 [2024-07-15 10:41:09.770998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.771023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.771037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.771049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.380 [2024-07-15 10:41:09.771075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.380 qpair failed and we were unable to recover it. 00:24:21.380 [2024-07-15 10:41:09.780922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.380 [2024-07-15 10:41:09.781049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.781075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.781089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.781107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.380 [2024-07-15 10:41:09.781134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.380 qpair failed and we were unable to recover it. 
00:24:21.380 [2024-07-15 10:41:09.791036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.380 [2024-07-15 10:41:09.791155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.791183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.791199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.791211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.380 [2024-07-15 10:41:09.791239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.380 qpair failed and we were unable to recover it. 00:24:21.380 [2024-07-15 10:41:09.801008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.380 [2024-07-15 10:41:09.801130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.801155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.801169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.801183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.380 [2024-07-15 10:41:09.801211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.380 qpair failed and we were unable to recover it. 00:24:21.380 [2024-07-15 10:41:09.810994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.380 [2024-07-15 10:41:09.811079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.811103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.811116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.811129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.380 [2024-07-15 10:41:09.811156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.380 qpair failed and we were unable to recover it. 
00:24:21.380 [2024-07-15 10:41:09.821002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.380 [2024-07-15 10:41:09.821082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.821106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.821120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.821133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.380 [2024-07-15 10:41:09.821160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.380 qpair failed and we were unable to recover it. 00:24:21.380 [2024-07-15 10:41:09.831052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.380 [2024-07-15 10:41:09.831147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.831171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.831185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.831198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.380 [2024-07-15 10:41:09.831226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.380 qpair failed and we were unable to recover it. 00:24:21.380 [2024-07-15 10:41:09.841054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.380 [2024-07-15 10:41:09.841140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.841164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.841179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.841191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.380 [2024-07-15 10:41:09.841218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.380 qpair failed and we were unable to recover it. 
00:24:21.380 [2024-07-15 10:41:09.851096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.380 [2024-07-15 10:41:09.851178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.851202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.851216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.851228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.380 [2024-07-15 10:41:09.851255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.380 qpair failed and we were unable to recover it. 00:24:21.380 [2024-07-15 10:41:09.861224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.380 [2024-07-15 10:41:09.861313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.861337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.861352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.861364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.380 [2024-07-15 10:41:09.861391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.380 qpair failed and we were unable to recover it. 00:24:21.380 [2024-07-15 10:41:09.871163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.380 [2024-07-15 10:41:09.871299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.380 [2024-07-15 10:41:09.871325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.380 [2024-07-15 10:41:09.871340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.380 [2024-07-15 10:41:09.871357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.381 [2024-07-15 10:41:09.871384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.381 qpair failed and we were unable to recover it. 
00:24:21.381 [2024-07-15 10:41:09.881245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.381 [2024-07-15 10:41:09.881358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.381 [2024-07-15 10:41:09.881384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.381 [2024-07-15 10:41:09.881400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.381 [2024-07-15 10:41:09.881412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.381 [2024-07-15 10:41:09.881440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.381 qpair failed and we were unable to recover it. 00:24:21.381 [2024-07-15 10:41:09.891219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.381 [2024-07-15 10:41:09.891302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.381 [2024-07-15 10:41:09.891326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.381 [2024-07-15 10:41:09.891340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.381 [2024-07-15 10:41:09.891353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.381 [2024-07-15 10:41:09.891380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.381 qpair failed and we were unable to recover it. 00:24:21.381 [2024-07-15 10:41:09.901260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.381 [2024-07-15 10:41:09.901350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.381 [2024-07-15 10:41:09.901374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.381 [2024-07-15 10:41:09.901389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.381 [2024-07-15 10:41:09.901401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.381 [2024-07-15 10:41:09.901429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.381 qpair failed and we were unable to recover it. 
00:24:21.381 [2024-07-15 10:41:09.911309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.381 [2024-07-15 10:41:09.911423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.381 [2024-07-15 10:41:09.911449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.381 [2024-07-15 10:41:09.911464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.381 [2024-07-15 10:41:09.911477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.381 [2024-07-15 10:41:09.911504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.381 qpair failed and we were unable to recover it. 00:24:21.381 [2024-07-15 10:41:09.921288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.381 [2024-07-15 10:41:09.921385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.381 [2024-07-15 10:41:09.921409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.381 [2024-07-15 10:41:09.921425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.381 [2024-07-15 10:41:09.921437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.381 [2024-07-15 10:41:09.921465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.381 qpair failed and we were unable to recover it. 00:24:21.639 [2024-07-15 10:41:09.931361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.639 [2024-07-15 10:41:09.931474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.639 [2024-07-15 10:41:09.931504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.639 [2024-07-15 10:41:09.931522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.639 [2024-07-15 10:41:09.931535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.639 [2024-07-15 10:41:09.931564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.639 qpair failed and we were unable to recover it. 
00:24:21.639 [2024-07-15 10:41:09.941377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.639 [2024-07-15 10:41:09.941463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.639 [2024-07-15 10:41:09.941492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.639 [2024-07-15 10:41:09.941508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.639 [2024-07-15 10:41:09.941521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.639 [2024-07-15 10:41:09.941549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.639 qpair failed and we were unable to recover it. 00:24:21.639 [2024-07-15 10:41:09.951419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.639 [2024-07-15 10:41:09.951506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.639 [2024-07-15 10:41:09.951531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.639 [2024-07-15 10:41:09.951547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.639 [2024-07-15 10:41:09.951560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.639 [2024-07-15 10:41:09.951588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.639 qpair failed and we were unable to recover it. 00:24:21.639 [2024-07-15 10:41:09.961476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.639 [2024-07-15 10:41:09.961561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.639 [2024-07-15 10:41:09.961586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.639 [2024-07-15 10:41:09.961606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.639 [2024-07-15 10:41:09.961620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.639 [2024-07-15 10:41:09.961648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.639 qpair failed and we were unable to recover it. 
00:24:21.639 [2024-07-15 10:41:09.971482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.639 [2024-07-15 10:41:09.971609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.639 [2024-07-15 10:41:09.971635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.639 [2024-07-15 10:41:09.971651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.639 [2024-07-15 10:41:09.971663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.639 [2024-07-15 10:41:09.971692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.639 qpair failed and we were unable to recover it. 00:24:21.639 [2024-07-15 10:41:09.981531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.639 [2024-07-15 10:41:09.981656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.639 [2024-07-15 10:41:09.981683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.639 [2024-07-15 10:41:09.981698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.639 [2024-07-15 10:41:09.981710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.639 [2024-07-15 10:41:09.981739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.639 qpair failed and we were unable to recover it. 00:24:21.639 [2024-07-15 10:41:09.991528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.639 [2024-07-15 10:41:09.991630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.639 [2024-07-15 10:41:09.991656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.639 [2024-07-15 10:41:09.991671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.639 [2024-07-15 10:41:09.991684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.639 [2024-07-15 10:41:09.991712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.639 qpair failed and we were unable to recover it. 
00:24:21.639 [2024-07-15 10:41:10.001574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.639 [2024-07-15 10:41:10.001697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.639 [2024-07-15 10:41:10.001724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.639 [2024-07-15 10:41:10.001748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.639 [2024-07-15 10:41:10.001761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.639 [2024-07-15 10:41:10.001790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.639 qpair failed and we were unable to recover it. 00:24:21.639 [2024-07-15 10:41:10.011613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.639 [2024-07-15 10:41:10.011711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.639 [2024-07-15 10:41:10.011737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.639 [2024-07-15 10:41:10.011752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.639 [2024-07-15 10:41:10.011764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.639 [2024-07-15 10:41:10.011793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.639 qpair failed and we were unable to recover it. 00:24:21.639 [2024-07-15 10:41:10.021663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.639 [2024-07-15 10:41:10.021782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.639 [2024-07-15 10:41:10.021840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.639 [2024-07-15 10:41:10.021868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.639 [2024-07-15 10:41:10.021893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.021940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 
00:24:21.640 [2024-07-15 10:41:10.031696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.031855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.031892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.031915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.031933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.031972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 00:24:21.640 [2024-07-15 10:41:10.041698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.041811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.041846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.041872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.041894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.041934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 00:24:21.640 [2024-07-15 10:41:10.051718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.051818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.051850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.051873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.051887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.051916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 
00:24:21.640 [2024-07-15 10:41:10.061719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.061811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.061838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.061853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.061865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.061894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 00:24:21.640 [2024-07-15 10:41:10.071811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.071912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.071938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.071952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.071965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.071993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 00:24:21.640 [2024-07-15 10:41:10.081786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.081910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.081937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.081952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.081965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.081994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 
00:24:21.640 [2024-07-15 10:41:10.091807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.091936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.091962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.091977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.091989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.092017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 00:24:21.640 [2024-07-15 10:41:10.101847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.101931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.101959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.101974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.101986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.102014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 00:24:21.640 [2024-07-15 10:41:10.111892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.112007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.112032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.112046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.112059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.112087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 
00:24:21.640 [2024-07-15 10:41:10.121986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.122073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.122098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.122113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.122126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.122154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 00:24:21.640 [2024-07-15 10:41:10.131941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.132027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.132055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.132071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.132083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.132111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 00:24:21.640 [2024-07-15 10:41:10.141996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.142131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.142156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.142176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.142190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.142217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 
00:24:21.640 [2024-07-15 10:41:10.152028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.152118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.152143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.152157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.152170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.152198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 00:24:21.640 [2024-07-15 10:41:10.161995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.162093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.162118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.162133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.162145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.162173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 00:24:21.640 [2024-07-15 10:41:10.172043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.172133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.172157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.172171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.172184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.172213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 
00:24:21.640 [2024-07-15 10:41:10.182055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.640 [2024-07-15 10:41:10.182144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.640 [2024-07-15 10:41:10.182169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.640 [2024-07-15 10:41:10.182184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.640 [2024-07-15 10:41:10.182197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.640 [2024-07-15 10:41:10.182225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.640 qpair failed and we were unable to recover it. 00:24:21.897 [2024-07-15 10:41:10.192102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.897 [2024-07-15 10:41:10.192196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.897 [2024-07-15 10:41:10.192223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.897 [2024-07-15 10:41:10.192243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.897 [2024-07-15 10:41:10.192266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.897 [2024-07-15 10:41:10.192307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.897 qpair failed and we were unable to recover it. 00:24:21.897 [2024-07-15 10:41:10.202140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.897 [2024-07-15 10:41:10.202232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.897 [2024-07-15 10:41:10.202258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.897 [2024-07-15 10:41:10.202273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.897 [2024-07-15 10:41:10.202286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.897 [2024-07-15 10:41:10.202314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.897 qpair failed and we were unable to recover it. 
00:24:21.897 [2024-07-15 10:41:10.212191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.897 [2024-07-15 10:41:10.212313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.897 [2024-07-15 10:41:10.212338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.897 [2024-07-15 10:41:10.212353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.897 [2024-07-15 10:41:10.212366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.897 [2024-07-15 10:41:10.212393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.897 qpair failed and we were unable to recover it. 00:24:21.897 [2024-07-15 10:41:10.222262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.222347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.898 [2024-07-15 10:41:10.222372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.898 [2024-07-15 10:41:10.222387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.898 [2024-07-15 10:41:10.222400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.898 [2024-07-15 10:41:10.222427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.898 qpair failed and we were unable to recover it. 00:24:21.898 [2024-07-15 10:41:10.232252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.232341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.898 [2024-07-15 10:41:10.232374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.898 [2024-07-15 10:41:10.232389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.898 [2024-07-15 10:41:10.232402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.898 [2024-07-15 10:41:10.232430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.898 qpair failed and we were unable to recover it. 
00:24:21.898 [2024-07-15 10:41:10.242237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.242368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.898 [2024-07-15 10:41:10.242393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.898 [2024-07-15 10:41:10.242408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.898 [2024-07-15 10:41:10.242421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.898 [2024-07-15 10:41:10.242449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.898 qpair failed and we were unable to recover it. 00:24:21.898 [2024-07-15 10:41:10.252290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.252378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.898 [2024-07-15 10:41:10.252403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.898 [2024-07-15 10:41:10.252418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.898 [2024-07-15 10:41:10.252431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.898 [2024-07-15 10:41:10.252458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.898 qpair failed and we were unable to recover it. 00:24:21.898 [2024-07-15 10:41:10.262385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.262484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.898 [2024-07-15 10:41:10.262509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.898 [2024-07-15 10:41:10.262523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.898 [2024-07-15 10:41:10.262536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.898 [2024-07-15 10:41:10.262564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.898 qpair failed and we were unable to recover it. 
00:24:21.898 [2024-07-15 10:41:10.272326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.272413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.898 [2024-07-15 10:41:10.272437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.898 [2024-07-15 10:41:10.272452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.898 [2024-07-15 10:41:10.272464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.898 [2024-07-15 10:41:10.272492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.898 qpair failed and we were unable to recover it. 00:24:21.898 [2024-07-15 10:41:10.282450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.282538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.898 [2024-07-15 10:41:10.282563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.898 [2024-07-15 10:41:10.282577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.898 [2024-07-15 10:41:10.282590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.898 [2024-07-15 10:41:10.282618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.898 qpair failed and we were unable to recover it. 00:24:21.898 [2024-07-15 10:41:10.292394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.292479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.898 [2024-07-15 10:41:10.292504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.898 [2024-07-15 10:41:10.292519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.898 [2024-07-15 10:41:10.292531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.898 [2024-07-15 10:41:10.292558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.898 qpair failed and we were unable to recover it. 
00:24:21.898 [2024-07-15 10:41:10.302417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.302551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.898 [2024-07-15 10:41:10.302575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.898 [2024-07-15 10:41:10.302590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.898 [2024-07-15 10:41:10.302603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.898 [2024-07-15 10:41:10.302631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.898 qpair failed and we were unable to recover it. 00:24:21.898 [2024-07-15 10:41:10.312452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.312547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.898 [2024-07-15 10:41:10.312571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.898 [2024-07-15 10:41:10.312586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.898 [2024-07-15 10:41:10.312598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.898 [2024-07-15 10:41:10.312626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.898 qpair failed and we were unable to recover it. 00:24:21.898 [2024-07-15 10:41:10.322484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.322573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.898 [2024-07-15 10:41:10.322602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.898 [2024-07-15 10:41:10.322618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.898 [2024-07-15 10:41:10.322631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.898 [2024-07-15 10:41:10.322658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.898 qpair failed and we were unable to recover it. 
00:24:21.898 [2024-07-15 10:41:10.332586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.332682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.898 [2024-07-15 10:41:10.332708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.898 [2024-07-15 10:41:10.332724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.898 [2024-07-15 10:41:10.332737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.898 [2024-07-15 10:41:10.332765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.898 qpair failed and we were unable to recover it. 00:24:21.898 [2024-07-15 10:41:10.342552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.342686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.898 [2024-07-15 10:41:10.342711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.898 [2024-07-15 10:41:10.342726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.898 [2024-07-15 10:41:10.342740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.898 [2024-07-15 10:41:10.342767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.898 qpair failed and we were unable to recover it. 00:24:21.898 [2024-07-15 10:41:10.352566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.352657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.898 [2024-07-15 10:41:10.352682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.898 [2024-07-15 10:41:10.352696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.898 [2024-07-15 10:41:10.352709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.898 [2024-07-15 10:41:10.352737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.898 qpair failed and we were unable to recover it. 
00:24:21.898 [2024-07-15 10:41:10.362616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.898 [2024-07-15 10:41:10.362727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.899 [2024-07-15 10:41:10.362753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.899 [2024-07-15 10:41:10.362768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.899 [2024-07-15 10:41:10.362780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.899 [2024-07-15 10:41:10.362821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.899 qpair failed and we were unable to recover it. 00:24:21.899 [2024-07-15 10:41:10.372622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.899 [2024-07-15 10:41:10.372702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.899 [2024-07-15 10:41:10.372727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.899 [2024-07-15 10:41:10.372741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.899 [2024-07-15 10:41:10.372754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.899 [2024-07-15 10:41:10.372781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.899 qpair failed and we were unable to recover it. 00:24:21.899 [2024-07-15 10:41:10.382683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.899 [2024-07-15 10:41:10.382769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.899 [2024-07-15 10:41:10.382795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.899 [2024-07-15 10:41:10.382822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.899 [2024-07-15 10:41:10.382842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.899 [2024-07-15 10:41:10.382872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.899 qpair failed and we were unable to recover it. 
00:24:21.899 [2024-07-15 10:41:10.392680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.899 [2024-07-15 10:41:10.392813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.899 [2024-07-15 10:41:10.392839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.899 [2024-07-15 10:41:10.392854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.899 [2024-07-15 10:41:10.392866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.899 [2024-07-15 10:41:10.392894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.899 qpair failed and we were unable to recover it. 00:24:21.899 [2024-07-15 10:41:10.402756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.899 [2024-07-15 10:41:10.402903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.899 [2024-07-15 10:41:10.402928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.899 [2024-07-15 10:41:10.402944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.899 [2024-07-15 10:41:10.402956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.899 [2024-07-15 10:41:10.402984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.899 qpair failed and we were unable to recover it. 00:24:21.899 [2024-07-15 10:41:10.412746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.899 [2024-07-15 10:41:10.412836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.899 [2024-07-15 10:41:10.412866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.899 [2024-07-15 10:41:10.412882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.899 [2024-07-15 10:41:10.412895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.899 [2024-07-15 10:41:10.412924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.899 qpair failed and we were unable to recover it. 
00:24:21.899 [2024-07-15 10:41:10.422755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.899 [2024-07-15 10:41:10.422856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.899 [2024-07-15 10:41:10.422882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.899 [2024-07-15 10:41:10.422897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.899 [2024-07-15 10:41:10.422910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.899 [2024-07-15 10:41:10.422938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.899 qpair failed and we were unable to recover it. 00:24:21.899 [2024-07-15 10:41:10.432797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.899 [2024-07-15 10:41:10.432896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.899 [2024-07-15 10:41:10.432920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.899 [2024-07-15 10:41:10.432935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.899 [2024-07-15 10:41:10.432947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.899 [2024-07-15 10:41:10.432976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.899 qpair failed and we were unable to recover it. 00:24:21.899 [2024-07-15 10:41:10.442833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.899 [2024-07-15 10:41:10.442962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.899 [2024-07-15 10:41:10.442987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.899 [2024-07-15 10:41:10.443001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.899 [2024-07-15 10:41:10.443014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:21.899 [2024-07-15 10:41:10.443042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.899 qpair failed and we were unable to recover it. 
00:24:22.157 [2024-07-15 10:41:10.452900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.157 [2024-07-15 10:41:10.453009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.157 [2024-07-15 10:41:10.453036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.157 [2024-07-15 10:41:10.453051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.157 [2024-07-15 10:41:10.453064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.157 [2024-07-15 10:41:10.453097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.157 qpair failed and we were unable to recover it. 00:24:22.157 [2024-07-15 10:41:10.462883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.157 [2024-07-15 10:41:10.462972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.157 [2024-07-15 10:41:10.462997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.157 [2024-07-15 10:41:10.463011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.157 [2024-07-15 10:41:10.463023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.157 [2024-07-15 10:41:10.463052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.157 qpair failed and we were unable to recover it. 00:24:22.157 [2024-07-15 10:41:10.472930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.157 [2024-07-15 10:41:10.473027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.157 [2024-07-15 10:41:10.473052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.157 [2024-07-15 10:41:10.473067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.157 [2024-07-15 10:41:10.473080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.157 [2024-07-15 10:41:10.473108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.157 qpair failed and we were unable to recover it. 
00:24:22.157 [2024-07-15 10:41:10.482916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.157 [2024-07-15 10:41:10.482998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.157 [2024-07-15 10:41:10.483023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.157 [2024-07-15 10:41:10.483038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.157 [2024-07-15 10:41:10.483050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.157 [2024-07-15 10:41:10.483077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.157 qpair failed and we were unable to recover it. 00:24:22.157 [2024-07-15 10:41:10.492995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.157 [2024-07-15 10:41:10.493086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.157 [2024-07-15 10:41:10.493112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.157 [2024-07-15 10:41:10.493126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.157 [2024-07-15 10:41:10.493139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.157 [2024-07-15 10:41:10.493166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.157 qpair failed and we were unable to recover it. 00:24:22.157 [2024-07-15 10:41:10.502983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.157 [2024-07-15 10:41:10.503066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.157 [2024-07-15 10:41:10.503096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.157 [2024-07-15 10:41:10.503112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.157 [2024-07-15 10:41:10.503124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.157 [2024-07-15 10:41:10.503153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.157 qpair failed and we were unable to recover it. 
00:24:22.157 [2024-07-15 10:41:10.513045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.157 [2024-07-15 10:41:10.513182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.157 [2024-07-15 10:41:10.513220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.157 [2024-07-15 10:41:10.513235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.157 [2024-07-15 10:41:10.513248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.157 [2024-07-15 10:41:10.513276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.157 qpair failed and we were unable to recover it. 00:24:22.157 [2024-07-15 10:41:10.523039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.157 [2024-07-15 10:41:10.523124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.157 [2024-07-15 10:41:10.523148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.157 [2024-07-15 10:41:10.523163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.157 [2024-07-15 10:41:10.523175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.157 [2024-07-15 10:41:10.523203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.157 qpair failed and we were unable to recover it. 00:24:22.157 [2024-07-15 10:41:10.533060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.157 [2024-07-15 10:41:10.533153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.157 [2024-07-15 10:41:10.533177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.157 [2024-07-15 10:41:10.533192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.157 [2024-07-15 10:41:10.533205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.157 [2024-07-15 10:41:10.533232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.157 qpair failed and we were unable to recover it. 
00:24:22.157 [2024-07-15 10:41:10.543131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.157 [2024-07-15 10:41:10.543220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.157 [2024-07-15 10:41:10.543244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.157 [2024-07-15 10:41:10.543259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.157 [2024-07-15 10:41:10.543277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.157 [2024-07-15 10:41:10.543306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.157 qpair failed and we were unable to recover it. 00:24:22.157 [2024-07-15 10:41:10.553122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.157 [2024-07-15 10:41:10.553214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.157 [2024-07-15 10:41:10.553242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.157 [2024-07-15 10:41:10.553258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.157 [2024-07-15 10:41:10.553271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.553300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 00:24:22.158 [2024-07-15 10:41:10.563139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.158 [2024-07-15 10:41:10.563234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.158 [2024-07-15 10:41:10.563258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.158 [2024-07-15 10:41:10.563273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.158 [2024-07-15 10:41:10.563285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.563313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 
00:24:22.158 [2024-07-15 10:41:10.573167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.158 [2024-07-15 10:41:10.573259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.158 [2024-07-15 10:41:10.573283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.158 [2024-07-15 10:41:10.573298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.158 [2024-07-15 10:41:10.573311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.573338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 00:24:22.158 [2024-07-15 10:41:10.583330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.158 [2024-07-15 10:41:10.583420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.158 [2024-07-15 10:41:10.583445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.158 [2024-07-15 10:41:10.583459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.158 [2024-07-15 10:41:10.583472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.583500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 00:24:22.158 [2024-07-15 10:41:10.593265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.158 [2024-07-15 10:41:10.593356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.158 [2024-07-15 10:41:10.593381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.158 [2024-07-15 10:41:10.593396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.158 [2024-07-15 10:41:10.593408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.593435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 
00:24:22.158 [2024-07-15 10:41:10.603287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.158 [2024-07-15 10:41:10.603377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.158 [2024-07-15 10:41:10.603402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.158 [2024-07-15 10:41:10.603417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.158 [2024-07-15 10:41:10.603430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.603458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 00:24:22.158 [2024-07-15 10:41:10.613300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.158 [2024-07-15 10:41:10.613391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.158 [2024-07-15 10:41:10.613419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.158 [2024-07-15 10:41:10.613435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.158 [2024-07-15 10:41:10.613448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.613476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 00:24:22.158 [2024-07-15 10:41:10.623393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.158 [2024-07-15 10:41:10.623475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.158 [2024-07-15 10:41:10.623500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.158 [2024-07-15 10:41:10.623515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.158 [2024-07-15 10:41:10.623528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.623556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 
00:24:22.158 [2024-07-15 10:41:10.633336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.158 [2024-07-15 10:41:10.633428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.158 [2024-07-15 10:41:10.633452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.158 [2024-07-15 10:41:10.633467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.158 [2024-07-15 10:41:10.633485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.633513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 00:24:22.158 [2024-07-15 10:41:10.643397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.158 [2024-07-15 10:41:10.643540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.158 [2024-07-15 10:41:10.643568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.158 [2024-07-15 10:41:10.643585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.158 [2024-07-15 10:41:10.643598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.643641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 00:24:22.158 [2024-07-15 10:41:10.653479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.158 [2024-07-15 10:41:10.653599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.158 [2024-07-15 10:41:10.653624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.158 [2024-07-15 10:41:10.653639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.158 [2024-07-15 10:41:10.653653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.653680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 
00:24:22.158 [2024-07-15 10:41:10.663424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.158 [2024-07-15 10:41:10.663509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.158 [2024-07-15 10:41:10.663533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.158 [2024-07-15 10:41:10.663547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.158 [2024-07-15 10:41:10.663560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.663588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 00:24:22.158 [2024-07-15 10:41:10.673465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.158 [2024-07-15 10:41:10.673558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.158 [2024-07-15 10:41:10.673582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.158 [2024-07-15 10:41:10.673597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.158 [2024-07-15 10:41:10.673610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.673638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 00:24:22.158 [2024-07-15 10:41:10.683563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.158 [2024-07-15 10:41:10.683659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.158 [2024-07-15 10:41:10.683683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.158 [2024-07-15 10:41:10.683698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.158 [2024-07-15 10:41:10.683710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.683738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 
00:24:22.158 [2024-07-15 10:41:10.693506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.158 [2024-07-15 10:41:10.693594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.158 [2024-07-15 10:41:10.693619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.158 [2024-07-15 10:41:10.693633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.158 [2024-07-15 10:41:10.693646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.158 [2024-07-15 10:41:10.693673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.158 qpair failed and we were unable to recover it. 00:24:22.159 [2024-07-15 10:41:10.703536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.159 [2024-07-15 10:41:10.703629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.159 [2024-07-15 10:41:10.703656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.159 [2024-07-15 10:41:10.703672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.159 [2024-07-15 10:41:10.703684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.159 [2024-07-15 10:41:10.703713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.159 qpair failed and we were unable to recover it. 00:24:22.416 [2024-07-15 10:41:10.713580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.416 [2024-07-15 10:41:10.713676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.416 [2024-07-15 10:41:10.713702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.416 [2024-07-15 10:41:10.713718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.416 [2024-07-15 10:41:10.713731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.416 [2024-07-15 10:41:10.713760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.416 qpair failed and we were unable to recover it. 
00:24:22.416 [2024-07-15 10:41:10.723672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.416 [2024-07-15 10:41:10.723762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.416 [2024-07-15 10:41:10.723788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.416 [2024-07-15 10:41:10.723811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.416 [2024-07-15 10:41:10.723832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.416 [2024-07-15 10:41:10.723861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.416 qpair failed and we were unable to recover it. 00:24:22.416 [2024-07-15 10:41:10.733626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.416 [2024-07-15 10:41:10.733714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.416 [2024-07-15 10:41:10.733739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.416 [2024-07-15 10:41:10.733754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.416 [2024-07-15 10:41:10.733766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.416 [2024-07-15 10:41:10.733794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.416 qpair failed and we were unable to recover it. 00:24:22.416 [2024-07-15 10:41:10.743681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.416 [2024-07-15 10:41:10.743762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.416 [2024-07-15 10:41:10.743788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.416 [2024-07-15 10:41:10.743809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.416 [2024-07-15 10:41:10.743825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.416 [2024-07-15 10:41:10.743853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.416 qpair failed and we were unable to recover it. 
00:24:22.416 [2024-07-15 10:41:10.753690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.416 [2024-07-15 10:41:10.753784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.416 [2024-07-15 10:41:10.753815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.416 [2024-07-15 10:41:10.753831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.416 [2024-07-15 10:41:10.753844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.416 [2024-07-15 10:41:10.753873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.416 qpair failed and we were unable to recover it. 00:24:22.416 [2024-07-15 10:41:10.763691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.416 [2024-07-15 10:41:10.763833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.416 [2024-07-15 10:41:10.763860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.416 [2024-07-15 10:41:10.763876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.416 [2024-07-15 10:41:10.763889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.763918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 00:24:22.417 [2024-07-15 10:41:10.773735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.773840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.417 [2024-07-15 10:41:10.773868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.417 [2024-07-15 10:41:10.773883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.417 [2024-07-15 10:41:10.773896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.773927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 
00:24:22.417 [2024-07-15 10:41:10.783777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.783872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.417 [2024-07-15 10:41:10.783900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.417 [2024-07-15 10:41:10.783917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.417 [2024-07-15 10:41:10.783929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.783958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 00:24:22.417 [2024-07-15 10:41:10.793831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.793920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.417 [2024-07-15 10:41:10.793945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.417 [2024-07-15 10:41:10.793959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.417 [2024-07-15 10:41:10.793972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.794001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 00:24:22.417 [2024-07-15 10:41:10.803847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.803975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.417 [2024-07-15 10:41:10.804002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.417 [2024-07-15 10:41:10.804017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.417 [2024-07-15 10:41:10.804030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.804058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 
00:24:22.417 [2024-07-15 10:41:10.813880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.814007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.417 [2024-07-15 10:41:10.814034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.417 [2024-07-15 10:41:10.814055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.417 [2024-07-15 10:41:10.814068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.814095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 00:24:22.417 [2024-07-15 10:41:10.823896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.824010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.417 [2024-07-15 10:41:10.824036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.417 [2024-07-15 10:41:10.824051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.417 [2024-07-15 10:41:10.824064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.824092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 00:24:22.417 [2024-07-15 10:41:10.833938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.834027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.417 [2024-07-15 10:41:10.834051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.417 [2024-07-15 10:41:10.834066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.417 [2024-07-15 10:41:10.834078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.834107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 
00:24:22.417 [2024-07-15 10:41:10.843961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.844053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.417 [2024-07-15 10:41:10.844078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.417 [2024-07-15 10:41:10.844093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.417 [2024-07-15 10:41:10.844106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.844134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 00:24:22.417 [2024-07-15 10:41:10.853999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.854091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.417 [2024-07-15 10:41:10.854119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.417 [2024-07-15 10:41:10.854135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.417 [2024-07-15 10:41:10.854149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.854178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 00:24:22.417 [2024-07-15 10:41:10.863998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.864083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.417 [2024-07-15 10:41:10.864108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.417 [2024-07-15 10:41:10.864123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.417 [2024-07-15 10:41:10.864136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.864164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 
00:24:22.417 [2024-07-15 10:41:10.874051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.874140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.417 [2024-07-15 10:41:10.874164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.417 [2024-07-15 10:41:10.874179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.417 [2024-07-15 10:41:10.874191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.874220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 00:24:22.417 [2024-07-15 10:41:10.884069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.884181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.417 [2024-07-15 10:41:10.884212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.417 [2024-07-15 10:41:10.884227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.417 [2024-07-15 10:41:10.884240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.884268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 00:24:22.417 [2024-07-15 10:41:10.894105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.894243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.417 [2024-07-15 10:41:10.894268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.417 [2024-07-15 10:41:10.894282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.417 [2024-07-15 10:41:10.894295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.894323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 
00:24:22.417 [2024-07-15 10:41:10.904210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.904300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.417 [2024-07-15 10:41:10.904329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.417 [2024-07-15 10:41:10.904350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.417 [2024-07-15 10:41:10.904364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.417 [2024-07-15 10:41:10.904392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.417 qpair failed and we were unable to recover it. 00:24:22.417 [2024-07-15 10:41:10.914154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.417 [2024-07-15 10:41:10.914283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.418 [2024-07-15 10:41:10.914308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.418 [2024-07-15 10:41:10.914323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.418 [2024-07-15 10:41:10.914335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.418 [2024-07-15 10:41:10.914363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.418 qpair failed and we were unable to recover it. 00:24:22.418 [2024-07-15 10:41:10.924179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.418 [2024-07-15 10:41:10.924261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.418 [2024-07-15 10:41:10.924286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.418 [2024-07-15 10:41:10.924301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.418 [2024-07-15 10:41:10.924314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.418 [2024-07-15 10:41:10.924343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.418 qpair failed and we were unable to recover it. 
00:24:22.418 [2024-07-15 10:41:10.934206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.418 [2024-07-15 10:41:10.934333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.418 [2024-07-15 10:41:10.934357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.418 [2024-07-15 10:41:10.934372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.418 [2024-07-15 10:41:10.934384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.418 [2024-07-15 10:41:10.934412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.418 qpair failed and we were unable to recover it. 00:24:22.418 [2024-07-15 10:41:10.944317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.418 [2024-07-15 10:41:10.944407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.418 [2024-07-15 10:41:10.944432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.418 [2024-07-15 10:41:10.944447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.418 [2024-07-15 10:41:10.944459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.418 [2024-07-15 10:41:10.944487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.418 qpair failed and we were unable to recover it. 00:24:22.418 [2024-07-15 10:41:10.954347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.418 [2024-07-15 10:41:10.954455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.418 [2024-07-15 10:41:10.954480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.418 [2024-07-15 10:41:10.954495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.418 [2024-07-15 10:41:10.954508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.418 [2024-07-15 10:41:10.954536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.418 qpair failed and we were unable to recover it. 
00:24:22.418 [2024-07-15 10:41:10.964326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.418 [2024-07-15 10:41:10.964446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.418 [2024-07-15 10:41:10.964473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.418 [2024-07-15 10:41:10.964489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.418 [2024-07-15 10:41:10.964501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.418 [2024-07-15 10:41:10.964531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.418 qpair failed and we were unable to recover it. 00:24:22.676 [2024-07-15 10:41:10.974366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.676 [2024-07-15 10:41:10.974454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.676 [2024-07-15 10:41:10.974481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.676 [2024-07-15 10:41:10.974496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.676 [2024-07-15 10:41:10.974509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.676 [2024-07-15 10:41:10.974538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.676 qpair failed and we were unable to recover it. 00:24:22.676 [2024-07-15 10:41:10.984339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.676 [2024-07-15 10:41:10.984425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.676 [2024-07-15 10:41:10.984451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.676 [2024-07-15 10:41:10.984466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.676 [2024-07-15 10:41:10.984478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.676 [2024-07-15 10:41:10.984507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.676 qpair failed and we were unable to recover it. 
00:24:22.676 [2024-07-15 10:41:10.994412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.676 [2024-07-15 10:41:10.994540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.676 [2024-07-15 10:41:10.994572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.676 [2024-07-15 10:41:10.994588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:10.994601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:10.994629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 00:24:22.677 [2024-07-15 10:41:11.004451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.677 [2024-07-15 10:41:11.004577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.677 [2024-07-15 10:41:11.004602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.677 [2024-07-15 10:41:11.004617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:11.004630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:11.004659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 00:24:22.677 [2024-07-15 10:41:11.014471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.677 [2024-07-15 10:41:11.014562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.677 [2024-07-15 10:41:11.014587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.677 [2024-07-15 10:41:11.014602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:11.014614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:11.014642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 
00:24:22.677 [2024-07-15 10:41:11.024455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.677 [2024-07-15 10:41:11.024535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.677 [2024-07-15 10:41:11.024560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.677 [2024-07-15 10:41:11.024574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:11.024587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:11.024615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 00:24:22.677 [2024-07-15 10:41:11.034491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.677 [2024-07-15 10:41:11.034588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.677 [2024-07-15 10:41:11.034613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.677 [2024-07-15 10:41:11.034628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:11.034641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:11.034668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 00:24:22.677 [2024-07-15 10:41:11.044514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.677 [2024-07-15 10:41:11.044611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.677 [2024-07-15 10:41:11.044636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.677 [2024-07-15 10:41:11.044651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:11.044664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:11.044692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 
00:24:22.677 [2024-07-15 10:41:11.054599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.677 [2024-07-15 10:41:11.054713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.677 [2024-07-15 10:41:11.054738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.677 [2024-07-15 10:41:11.054754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:11.054767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:11.054795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 00:24:22.677 [2024-07-15 10:41:11.064670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.677 [2024-07-15 10:41:11.064758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.677 [2024-07-15 10:41:11.064783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.677 [2024-07-15 10:41:11.064797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:11.064818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:11.064846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 00:24:22.677 [2024-07-15 10:41:11.074626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.677 [2024-07-15 10:41:11.074722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.677 [2024-07-15 10:41:11.074746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.677 [2024-07-15 10:41:11.074761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:11.074773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:11.074808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 
00:24:22.677 [2024-07-15 10:41:11.084730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.677 [2024-07-15 10:41:11.084865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.677 [2024-07-15 10:41:11.084897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.677 [2024-07-15 10:41:11.084913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:11.084926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:11.084954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 00:24:22.677 [2024-07-15 10:41:11.094683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.677 [2024-07-15 10:41:11.094816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.677 [2024-07-15 10:41:11.094843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.677 [2024-07-15 10:41:11.094859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:11.094872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:11.094899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 00:24:22.677 [2024-07-15 10:41:11.104786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.677 [2024-07-15 10:41:11.104883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.677 [2024-07-15 10:41:11.104909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.677 [2024-07-15 10:41:11.104925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:11.104937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:11.104965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 
00:24:22.677 [2024-07-15 10:41:11.114839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.677 [2024-07-15 10:41:11.114960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.677 [2024-07-15 10:41:11.114987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.677 [2024-07-15 10:41:11.115003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:11.115015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:11.115043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 00:24:22.677 [2024-07-15 10:41:11.124749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.677 [2024-07-15 10:41:11.124845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.677 [2024-07-15 10:41:11.124875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.677 [2024-07-15 10:41:11.124890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:11.124902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:11.124935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 00:24:22.677 [2024-07-15 10:41:11.134864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.677 [2024-07-15 10:41:11.134955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.677 [2024-07-15 10:41:11.134981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.677 [2024-07-15 10:41:11.134996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.677 [2024-07-15 10:41:11.135009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.677 [2024-07-15 10:41:11.135036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.677 qpair failed and we were unable to recover it. 
00:24:22.678 [2024-07-15 10:41:11.144833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.678 [2024-07-15 10:41:11.144919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.678 [2024-07-15 10:41:11.144945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.678 [2024-07-15 10:41:11.144960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.678 [2024-07-15 10:41:11.144972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.678 [2024-07-15 10:41:11.144999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.678 qpair failed and we were unable to recover it. 00:24:22.678 [2024-07-15 10:41:11.154866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.678 [2024-07-15 10:41:11.154955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.678 [2024-07-15 10:41:11.154981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.678 [2024-07-15 10:41:11.154997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.678 [2024-07-15 10:41:11.155009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.678 [2024-07-15 10:41:11.155036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.678 qpair failed and we were unable to recover it. 00:24:22.678 [2024-07-15 10:41:11.164885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.678 [2024-07-15 10:41:11.164984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.678 [2024-07-15 10:41:11.165010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.678 [2024-07-15 10:41:11.165026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.678 [2024-07-15 10:41:11.165039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.678 [2024-07-15 10:41:11.165067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.678 qpair failed and we were unable to recover it. 
00:24:22.678 [2024-07-15 10:41:11.174928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.678 [2024-07-15 10:41:11.175018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.678 [2024-07-15 10:41:11.175049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.678 [2024-07-15 10:41:11.175065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.678 [2024-07-15 10:41:11.175078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.678 [2024-07-15 10:41:11.175106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.678 qpair failed and we were unable to recover it. 00:24:22.678 [2024-07-15 10:41:11.184979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.678 [2024-07-15 10:41:11.185063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.678 [2024-07-15 10:41:11.185088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.678 [2024-07-15 10:41:11.185103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.678 [2024-07-15 10:41:11.185115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.678 [2024-07-15 10:41:11.185143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.678 qpair failed and we were unable to recover it. 00:24:22.678 [2024-07-15 10:41:11.194992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.678 [2024-07-15 10:41:11.195116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.678 [2024-07-15 10:41:11.195141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.678 [2024-07-15 10:41:11.195157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.678 [2024-07-15 10:41:11.195169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.678 [2024-07-15 10:41:11.195197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.678 qpair failed and we were unable to recover it. 
00:24:22.678 [2024-07-15 10:41:11.204988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.678 [2024-07-15 10:41:11.205082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.678 [2024-07-15 10:41:11.205106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.678 [2024-07-15 10:41:11.205121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.678 [2024-07-15 10:41:11.205133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.678 [2024-07-15 10:41:11.205161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.678 qpair failed and we were unable to recover it. 00:24:22.678 [2024-07-15 10:41:11.215032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.678 [2024-07-15 10:41:11.215119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.678 [2024-07-15 10:41:11.215145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.678 [2024-07-15 10:41:11.215161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.678 [2024-07-15 10:41:11.215173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.678 [2024-07-15 10:41:11.215206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.678 qpair failed and we were unable to recover it. 00:24:22.678 [2024-07-15 10:41:11.225114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.678 [2024-07-15 10:41:11.225244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.678 [2024-07-15 10:41:11.225274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.678 [2024-07-15 10:41:11.225291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.678 [2024-07-15 10:41:11.225303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.678 [2024-07-15 10:41:11.225332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.678 qpair failed and we were unable to recover it. 
00:24:22.937 [2024-07-15 10:41:11.235151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.937 [2024-07-15 10:41:11.235252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.937 [2024-07-15 10:41:11.235280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.937 [2024-07-15 10:41:11.235295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.937 [2024-07-15 10:41:11.235308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.937 [2024-07-15 10:41:11.235338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.937 qpair failed and we were unable to recover it. 00:24:22.937 [2024-07-15 10:41:11.245149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.937 [2024-07-15 10:41:11.245250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.937 [2024-07-15 10:41:11.245279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.937 [2024-07-15 10:41:11.245297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.937 [2024-07-15 10:41:11.245310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.937 [2024-07-15 10:41:11.245340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.937 qpair failed and we were unable to recover it. 00:24:22.937 [2024-07-15 10:41:11.255142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.937 [2024-07-15 10:41:11.255229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.937 [2024-07-15 10:41:11.255253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.937 [2024-07-15 10:41:11.255268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.937 [2024-07-15 10:41:11.255280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.937 [2024-07-15 10:41:11.255308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.937 qpair failed and we were unable to recover it. 
00:24:22.937 [2024-07-15 10:41:11.265273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.937 [2024-07-15 10:41:11.265364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.937 [2024-07-15 10:41:11.265395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.937 [2024-07-15 10:41:11.265412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.937 [2024-07-15 10:41:11.265426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.937 [2024-07-15 10:41:11.265454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.937 qpair failed and we were unable to recover it. 00:24:22.937 [2024-07-15 10:41:11.275258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.937 [2024-07-15 10:41:11.275354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.937 [2024-07-15 10:41:11.275379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.937 [2024-07-15 10:41:11.275394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.937 [2024-07-15 10:41:11.275407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.937 [2024-07-15 10:41:11.275435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.937 qpair failed and we were unable to recover it. 00:24:22.938 [2024-07-15 10:41:11.285272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.285391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.285417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.938 [2024-07-15 10:41:11.285433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.938 [2024-07-15 10:41:11.285446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.938 [2024-07-15 10:41:11.285473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.938 qpair failed and we were unable to recover it. 
00:24:22.938 [2024-07-15 10:41:11.295307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.295407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.295433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.938 [2024-07-15 10:41:11.295448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.938 [2024-07-15 10:41:11.295461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.938 [2024-07-15 10:41:11.295490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.938 qpair failed and we were unable to recover it. 00:24:22.938 [2024-07-15 10:41:11.305290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.305379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.305403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.938 [2024-07-15 10:41:11.305418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.938 [2024-07-15 10:41:11.305435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.938 [2024-07-15 10:41:11.305464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.938 qpair failed and we were unable to recover it. 00:24:22.938 [2024-07-15 10:41:11.315401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.315508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.315534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.938 [2024-07-15 10:41:11.315549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.938 [2024-07-15 10:41:11.315561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.938 [2024-07-15 10:41:11.315588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.938 qpair failed and we were unable to recover it. 
00:24:22.938 [2024-07-15 10:41:11.325381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.325510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.325536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.938 [2024-07-15 10:41:11.325552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.938 [2024-07-15 10:41:11.325564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.938 [2024-07-15 10:41:11.325592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.938 qpair failed and we were unable to recover it. 00:24:22.938 [2024-07-15 10:41:11.335474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.335580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.335607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.938 [2024-07-15 10:41:11.335622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.938 [2024-07-15 10:41:11.335635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.938 [2024-07-15 10:41:11.335662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.938 qpair failed and we were unable to recover it. 00:24:22.938 [2024-07-15 10:41:11.345412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.345532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.345558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.938 [2024-07-15 10:41:11.345573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.938 [2024-07-15 10:41:11.345586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.938 [2024-07-15 10:41:11.345613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.938 qpair failed and we were unable to recover it. 
00:24:22.938 [2024-07-15 10:41:11.355427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.355523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.355548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.938 [2024-07-15 10:41:11.355562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.938 [2024-07-15 10:41:11.355575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.938 [2024-07-15 10:41:11.355602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.938 qpair failed and we were unable to recover it. 00:24:22.938 [2024-07-15 10:41:11.365461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.365556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.365582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.938 [2024-07-15 10:41:11.365597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.938 [2024-07-15 10:41:11.365610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.938 [2024-07-15 10:41:11.365638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.938 qpair failed and we were unable to recover it. 00:24:22.938 [2024-07-15 10:41:11.375471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.375591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.375616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.938 [2024-07-15 10:41:11.375632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.938 [2024-07-15 10:41:11.375644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.938 [2024-07-15 10:41:11.375671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.938 qpair failed and we were unable to recover it. 
00:24:22.938 [2024-07-15 10:41:11.385557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.385675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.385700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.938 [2024-07-15 10:41:11.385716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.938 [2024-07-15 10:41:11.385728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.938 [2024-07-15 10:41:11.385755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.938 qpair failed and we were unable to recover it. 00:24:22.938 [2024-07-15 10:41:11.395556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.395653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.395678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.938 [2024-07-15 10:41:11.395693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.938 [2024-07-15 10:41:11.395710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.938 [2024-07-15 10:41:11.395739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.938 qpair failed and we were unable to recover it. 00:24:22.938 [2024-07-15 10:41:11.405629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.405740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.405765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.938 [2024-07-15 10:41:11.405780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.938 [2024-07-15 10:41:11.405793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.938 [2024-07-15 10:41:11.405831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.938 qpair failed and we were unable to recover it. 
00:24:22.938 [2024-07-15 10:41:11.415604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.415696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.415721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.938 [2024-07-15 10:41:11.415735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.938 [2024-07-15 10:41:11.415748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.938 [2024-07-15 10:41:11.415775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.938 qpair failed and we were unable to recover it. 00:24:22.938 [2024-07-15 10:41:11.425637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.938 [2024-07-15 10:41:11.425761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.938 [2024-07-15 10:41:11.425787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.939 [2024-07-15 10:41:11.425809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.939 [2024-07-15 10:41:11.425824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.939 [2024-07-15 10:41:11.425853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.939 qpair failed and we were unable to recover it. 00:24:22.939 [2024-07-15 10:41:11.435668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.939 [2024-07-15 10:41:11.435816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.939 [2024-07-15 10:41:11.435842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.939 [2024-07-15 10:41:11.435858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.939 [2024-07-15 10:41:11.435871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.939 [2024-07-15 10:41:11.435899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.939 qpair failed and we were unable to recover it. 
00:24:22.939 [2024-07-15 10:41:11.445734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.939 [2024-07-15 10:41:11.445860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.939 [2024-07-15 10:41:11.445886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.939 [2024-07-15 10:41:11.445902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.939 [2024-07-15 10:41:11.445914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.939 [2024-07-15 10:41:11.445942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.939 qpair failed and we were unable to recover it. 00:24:22.939 [2024-07-15 10:41:11.455730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.939 [2024-07-15 10:41:11.455821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.939 [2024-07-15 10:41:11.455846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.939 [2024-07-15 10:41:11.455860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.939 [2024-07-15 10:41:11.455873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.939 [2024-07-15 10:41:11.455901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.939 qpair failed and we were unable to recover it. 00:24:22.939 [2024-07-15 10:41:11.465738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.939 [2024-07-15 10:41:11.465858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.939 [2024-07-15 10:41:11.465884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.939 [2024-07-15 10:41:11.465899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.939 [2024-07-15 10:41:11.465912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.939 [2024-07-15 10:41:11.465940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.939 qpair failed and we were unable to recover it. 
00:24:22.939 [2024-07-15 10:41:11.475780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.939 [2024-07-15 10:41:11.475892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.939 [2024-07-15 10:41:11.475918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.939 [2024-07-15 10:41:11.475933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.939 [2024-07-15 10:41:11.475946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.939 [2024-07-15 10:41:11.475973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.939 qpair failed and we were unable to recover it. 00:24:22.939 [2024-07-15 10:41:11.485783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.939 [2024-07-15 10:41:11.485886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.939 [2024-07-15 10:41:11.485915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.939 [2024-07-15 10:41:11.485932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.939 [2024-07-15 10:41:11.485960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:22.939 [2024-07-15 10:41:11.486006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.939 qpair failed and we were unable to recover it. 00:24:23.198 [2024-07-15 10:41:11.495827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.198 [2024-07-15 10:41:11.495914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.199 [2024-07-15 10:41:11.495941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.199 [2024-07-15 10:41:11.495957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.199 [2024-07-15 10:41:11.495970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.199 [2024-07-15 10:41:11.495998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.199 qpair failed and we were unable to recover it. 
00:24:23.199 [2024-07-15 10:41:11.505891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.199 [2024-07-15 10:41:11.505984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.199 [2024-07-15 10:41:11.506010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.199 [2024-07-15 10:41:11.506025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.199 [2024-07-15 10:41:11.506038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.199 [2024-07-15 10:41:11.506067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.199 qpair failed and we were unable to recover it. 00:24:23.199 [2024-07-15 10:41:11.515911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.199 [2024-07-15 10:41:11.516018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.199 [2024-07-15 10:41:11.516044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.199 [2024-07-15 10:41:11.516060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.199 [2024-07-15 10:41:11.516073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.199 [2024-07-15 10:41:11.516106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.199 qpair failed and we were unable to recover it. 00:24:23.199 [2024-07-15 10:41:11.525929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.199 [2024-07-15 10:41:11.526061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.199 [2024-07-15 10:41:11.526086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.199 [2024-07-15 10:41:11.526109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.199 [2024-07-15 10:41:11.526122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.199 [2024-07-15 10:41:11.526150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.199 qpair failed and we were unable to recover it. 
00:24:23.199 [2024-07-15 10:41:11.535963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.199 [2024-07-15 10:41:11.536046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.199 [2024-07-15 10:41:11.536071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.199 [2024-07-15 10:41:11.536086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.199 [2024-07-15 10:41:11.536106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.199 [2024-07-15 10:41:11.536134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.199 qpair failed and we were unable to recover it. 00:24:23.199 [2024-07-15 10:41:11.545988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.199 [2024-07-15 10:41:11.546079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.199 [2024-07-15 10:41:11.546103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.199 [2024-07-15 10:41:11.546118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.199 [2024-07-15 10:41:11.546131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.199 [2024-07-15 10:41:11.546158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.199 qpair failed and we were unable to recover it. 00:24:23.199 [2024-07-15 10:41:11.556014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.199 [2024-07-15 10:41:11.556119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.199 [2024-07-15 10:41:11.556142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.199 [2024-07-15 10:41:11.556157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.199 [2024-07-15 10:41:11.556169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.199 [2024-07-15 10:41:11.556202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.199 qpair failed and we were unable to recover it. 
00:24:23.199 [2024-07-15 10:41:11.566048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.199 [2024-07-15 10:41:11.566162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.199 [2024-07-15 10:41:11.566186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.199 [2024-07-15 10:41:11.566200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.199 [2024-07-15 10:41:11.566213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.199 [2024-07-15 10:41:11.566240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.199 qpair failed and we were unable to recover it. 00:24:23.199 [2024-07-15 10:41:11.576079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.199 [2024-07-15 10:41:11.576197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.199 [2024-07-15 10:41:11.576221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.199 [2024-07-15 10:41:11.576241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.199 [2024-07-15 10:41:11.576255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.199 [2024-07-15 10:41:11.576283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.199 qpair failed and we were unable to recover it. 00:24:23.199 [2024-07-15 10:41:11.586076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.199 [2024-07-15 10:41:11.586208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.199 [2024-07-15 10:41:11.586234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.199 [2024-07-15 10:41:11.586249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.199 [2024-07-15 10:41:11.586262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.199 [2024-07-15 10:41:11.586289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.199 qpair failed and we were unable to recover it. 
00:24:23.199 [2024-07-15 10:41:11.596135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.199 [2024-07-15 10:41:11.596225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.199 [2024-07-15 10:41:11.596249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.199 [2024-07-15 10:41:11.596263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.199 [2024-07-15 10:41:11.596275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.199 [2024-07-15 10:41:11.596303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.199 qpair failed and we were unable to recover it. 00:24:23.199 [2024-07-15 10:41:11.606131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.199 [2024-07-15 10:41:11.606223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.199 [2024-07-15 10:41:11.606248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.199 [2024-07-15 10:41:11.606263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.199 [2024-07-15 10:41:11.606275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.199 [2024-07-15 10:41:11.606303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.199 qpair failed and we were unable to recover it. 00:24:23.199 [2024-07-15 10:41:11.616151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.199 [2024-07-15 10:41:11.616237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.199 [2024-07-15 10:41:11.616261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.199 [2024-07-15 10:41:11.616276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.199 [2024-07-15 10:41:11.616288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.199 [2024-07-15 10:41:11.616316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.199 qpair failed and we were unable to recover it. 
00:24:23.199 [2024-07-15 10:41:11.626181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.199 [2024-07-15 10:41:11.626272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.199 [2024-07-15 10:41:11.626298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.199 [2024-07-15 10:41:11.626314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.199 [2024-07-15 10:41:11.626327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.199 [2024-07-15 10:41:11.626356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.199 qpair failed and we were unable to recover it. 00:24:23.199 [2024-07-15 10:41:11.636203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.199 [2024-07-15 10:41:11.636297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.200 [2024-07-15 10:41:11.636322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.200 [2024-07-15 10:41:11.636337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.200 [2024-07-15 10:41:11.636350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.200 [2024-07-15 10:41:11.636378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.200 qpair failed and we were unable to recover it. 00:24:23.200 [2024-07-15 10:41:11.646306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.200 [2024-07-15 10:41:11.646423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.200 [2024-07-15 10:41:11.646449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.200 [2024-07-15 10:41:11.646464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.200 [2024-07-15 10:41:11.646477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.200 [2024-07-15 10:41:11.646505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.200 qpair failed and we were unable to recover it. 
00:24:23.200 [2024-07-15 10:41:11.656260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.200 [2024-07-15 10:41:11.656354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.200 [2024-07-15 10:41:11.656379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.200 [2024-07-15 10:41:11.656395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.200 [2024-07-15 10:41:11.656409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.200 [2024-07-15 10:41:11.656437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.200 qpair failed and we were unable to recover it. 00:24:23.200 [2024-07-15 10:41:11.666284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.200 [2024-07-15 10:41:11.666370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.200 [2024-07-15 10:41:11.666394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.200 [2024-07-15 10:41:11.666414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.200 [2024-07-15 10:41:11.666427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.200 [2024-07-15 10:41:11.666455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.200 qpair failed and we were unable to recover it. 00:24:23.200 [2024-07-15 10:41:11.676364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.200 [2024-07-15 10:41:11.676477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.200 [2024-07-15 10:41:11.676503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.200 [2024-07-15 10:41:11.676518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.200 [2024-07-15 10:41:11.676530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.200 [2024-07-15 10:41:11.676558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.200 qpair failed and we were unable to recover it. 
00:24:23.200 [2024-07-15 10:41:11.686409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.200 [2024-07-15 10:41:11.686506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.200 [2024-07-15 10:41:11.686531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.200 [2024-07-15 10:41:11.686547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.200 [2024-07-15 10:41:11.686559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.200 [2024-07-15 10:41:11.686587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.200 qpair failed and we were unable to recover it. 00:24:23.200 [2024-07-15 10:41:11.696375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.200 [2024-07-15 10:41:11.696464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.200 [2024-07-15 10:41:11.696488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.200 [2024-07-15 10:41:11.696503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.200 [2024-07-15 10:41:11.696516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.200 [2024-07-15 10:41:11.696544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.200 qpair failed and we were unable to recover it. 00:24:23.200 [2024-07-15 10:41:11.706401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.200 [2024-07-15 10:41:11.706490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.200 [2024-07-15 10:41:11.706516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.200 [2024-07-15 10:41:11.706531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.200 [2024-07-15 10:41:11.706544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.200 [2024-07-15 10:41:11.706572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.200 qpair failed and we were unable to recover it. 
00:24:23.200 [2024-07-15 10:41:11.716457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.200 [2024-07-15 10:41:11.716562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.200 [2024-07-15 10:41:11.716588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.200 [2024-07-15 10:41:11.716603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.200 [2024-07-15 10:41:11.716616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.200 [2024-07-15 10:41:11.716644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.200 qpair failed and we were unable to recover it. 00:24:23.200 [2024-07-15 10:41:11.726482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.200 [2024-07-15 10:41:11.726603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.200 [2024-07-15 10:41:11.726628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.200 [2024-07-15 10:41:11.726644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.200 [2024-07-15 10:41:11.726656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.200 [2024-07-15 10:41:11.726684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.200 qpair failed and we were unable to recover it. 00:24:23.200 [2024-07-15 10:41:11.736525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.200 [2024-07-15 10:41:11.736648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.200 [2024-07-15 10:41:11.736675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.200 [2024-07-15 10:41:11.736690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.200 [2024-07-15 10:41:11.736703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.200 [2024-07-15 10:41:11.736730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.200 qpair failed and we were unable to recover it. 
00:24:23.200 [2024-07-15 10:41:11.746572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.200 [2024-07-15 10:41:11.746664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.200 [2024-07-15 10:41:11.746692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.200 [2024-07-15 10:41:11.746709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.200 [2024-07-15 10:41:11.746722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.200 [2024-07-15 10:41:11.746751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.200 qpair failed and we were unable to recover it. 00:24:23.459 [2024-07-15 10:41:11.756687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.459 [2024-07-15 10:41:11.756795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.459 [2024-07-15 10:41:11.756831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.459 [2024-07-15 10:41:11.756854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.459 [2024-07-15 10:41:11.756868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.459 [2024-07-15 10:41:11.756898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.459 qpair failed and we were unable to recover it. 00:24:23.459 [2024-07-15 10:41:11.766711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.459 [2024-07-15 10:41:11.766847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.459 [2024-07-15 10:41:11.766874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.459 [2024-07-15 10:41:11.766889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.459 [2024-07-15 10:41:11.766903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.459 [2024-07-15 10:41:11.766930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.459 qpair failed and we were unable to recover it. 
00:24:23.459 [2024-07-15 10:41:11.776615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.459 [2024-07-15 10:41:11.776741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.459 [2024-07-15 10:41:11.776767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.459 [2024-07-15 10:41:11.776783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.459 [2024-07-15 10:41:11.776796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.459 [2024-07-15 10:41:11.776833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.459 qpair failed and we were unable to recover it. 00:24:23.459 [2024-07-15 10:41:11.786637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.459 [2024-07-15 10:41:11.786762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.459 [2024-07-15 10:41:11.786789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.459 [2024-07-15 10:41:11.786810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.459 [2024-07-15 10:41:11.786824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.459 [2024-07-15 10:41:11.786853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.459 qpair failed and we were unable to recover it. 00:24:23.459 [2024-07-15 10:41:11.796675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.459 [2024-07-15 10:41:11.796788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.459 [2024-07-15 10:41:11.796822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.459 [2024-07-15 10:41:11.796838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.459 [2024-07-15 10:41:11.796851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.459 [2024-07-15 10:41:11.796879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.459 qpair failed and we were unable to recover it. 
00:24:23.459 [2024-07-15 10:41:11.806696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.459 [2024-07-15 10:41:11.806782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.459 [2024-07-15 10:41:11.806813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.459 [2024-07-15 10:41:11.806829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.459 [2024-07-15 10:41:11.806842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.459 [2024-07-15 10:41:11.806871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.459 qpair failed and we were unable to recover it. 00:24:23.459 [2024-07-15 10:41:11.816731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.459 [2024-07-15 10:41:11.816826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.459 [2024-07-15 10:41:11.816851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.459 [2024-07-15 10:41:11.816865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.459 [2024-07-15 10:41:11.816878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.459 [2024-07-15 10:41:11.816906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.459 qpair failed and we were unable to recover it. 00:24:23.459 [2024-07-15 10:41:11.826786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.826879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.460 [2024-07-15 10:41:11.826903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.460 [2024-07-15 10:41:11.826918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.460 [2024-07-15 10:41:11.826931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.460 [2024-07-15 10:41:11.826959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.460 qpair failed and we were unable to recover it. 
00:24:23.460 [2024-07-15 10:41:11.836778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.836916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.460 [2024-07-15 10:41:11.836942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.460 [2024-07-15 10:41:11.836957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.460 [2024-07-15 10:41:11.836970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.460 [2024-07-15 10:41:11.836998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.460 qpair failed and we were unable to recover it. 00:24:23.460 [2024-07-15 10:41:11.846854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.846963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.460 [2024-07-15 10:41:11.846995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.460 [2024-07-15 10:41:11.847012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.460 [2024-07-15 10:41:11.847025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.460 [2024-07-15 10:41:11.847053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.460 qpair failed and we were unable to recover it. 00:24:23.460 [2024-07-15 10:41:11.856832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.856917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.460 [2024-07-15 10:41:11.856942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.460 [2024-07-15 10:41:11.856957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.460 [2024-07-15 10:41:11.856969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.460 [2024-07-15 10:41:11.856998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.460 qpair failed and we were unable to recover it. 
00:24:23.460 [2024-07-15 10:41:11.866909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.867008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.460 [2024-07-15 10:41:11.867033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.460 [2024-07-15 10:41:11.867048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.460 [2024-07-15 10:41:11.867060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.460 [2024-07-15 10:41:11.867089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.460 qpair failed and we were unable to recover it. 00:24:23.460 [2024-07-15 10:41:11.876939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.877033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.460 [2024-07-15 10:41:11.877057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.460 [2024-07-15 10:41:11.877071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.460 [2024-07-15 10:41:11.877084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.460 [2024-07-15 10:41:11.877112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.460 qpair failed and we were unable to recover it. 00:24:23.460 [2024-07-15 10:41:11.886962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.887091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.460 [2024-07-15 10:41:11.887118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.460 [2024-07-15 10:41:11.887135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.460 [2024-07-15 10:41:11.887148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.460 [2024-07-15 10:41:11.887181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.460 qpair failed and we were unable to recover it. 
00:24:23.460 [2024-07-15 10:41:11.896962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.897048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.460 [2024-07-15 10:41:11.897073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.460 [2024-07-15 10:41:11.897087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.460 [2024-07-15 10:41:11.897107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.460 [2024-07-15 10:41:11.897135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.460 qpair failed and we were unable to recover it. 00:24:23.460 [2024-07-15 10:41:11.907024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.907117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.460 [2024-07-15 10:41:11.907144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.460 [2024-07-15 10:41:11.907160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.460 [2024-07-15 10:41:11.907173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.460 [2024-07-15 10:41:11.907201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.460 qpair failed and we were unable to recover it. 00:24:23.460 [2024-07-15 10:41:11.917020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.917111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.460 [2024-07-15 10:41:11.917136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.460 [2024-07-15 10:41:11.917150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.460 [2024-07-15 10:41:11.917162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.460 [2024-07-15 10:41:11.917191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.460 qpair failed and we were unable to recover it. 
00:24:23.460 [2024-07-15 10:41:11.927100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.927200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.460 [2024-07-15 10:41:11.927225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.460 [2024-07-15 10:41:11.927239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.460 [2024-07-15 10:41:11.927253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.460 [2024-07-15 10:41:11.927281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.460 qpair failed and we were unable to recover it. 00:24:23.460 [2024-07-15 10:41:11.937086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.937170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.460 [2024-07-15 10:41:11.937200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.460 [2024-07-15 10:41:11.937215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.460 [2024-07-15 10:41:11.937229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.460 [2024-07-15 10:41:11.937257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.460 qpair failed and we were unable to recover it. 00:24:23.460 [2024-07-15 10:41:11.947085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.947174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.460 [2024-07-15 10:41:11.947198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.460 [2024-07-15 10:41:11.947213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.460 [2024-07-15 10:41:11.947226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.460 [2024-07-15 10:41:11.947253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.460 qpair failed and we were unable to recover it. 
00:24:23.460 [2024-07-15 10:41:11.957138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.957226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.460 [2024-07-15 10:41:11.957251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.460 [2024-07-15 10:41:11.957265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.460 [2024-07-15 10:41:11.957278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.460 [2024-07-15 10:41:11.957306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.460 qpair failed and we were unable to recover it. 00:24:23.460 [2024-07-15 10:41:11.967137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.460 [2024-07-15 10:41:11.967225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.461 [2024-07-15 10:41:11.967249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.461 [2024-07-15 10:41:11.967264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.461 [2024-07-15 10:41:11.967277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.461 [2024-07-15 10:41:11.967305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.461 qpair failed and we were unable to recover it. 00:24:23.461 [2024-07-15 10:41:11.977158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.461 [2024-07-15 10:41:11.977244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.461 [2024-07-15 10:41:11.977269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.461 [2024-07-15 10:41:11.977283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.461 [2024-07-15 10:41:11.977296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.461 [2024-07-15 10:41:11.977329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.461 qpair failed and we were unable to recover it. 
00:24:23.461 [2024-07-15 10:41:11.987223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.461 [2024-07-15 10:41:11.987313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.461 [2024-07-15 10:41:11.987337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.461 [2024-07-15 10:41:11.987352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.461 [2024-07-15 10:41:11.987365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.461 [2024-07-15 10:41:11.987393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.461 qpair failed and we were unable to recover it. 00:24:23.461 [2024-07-15 10:41:11.997217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.461 [2024-07-15 10:41:11.997308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.461 [2024-07-15 10:41:11.997333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.461 [2024-07-15 10:41:11.997347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.461 [2024-07-15 10:41:11.997360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.461 [2024-07-15 10:41:11.997387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.461 qpair failed and we were unable to recover it. 00:24:23.461 [2024-07-15 10:41:12.007247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.461 [2024-07-15 10:41:12.007333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.461 [2024-07-15 10:41:12.007360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.461 [2024-07-15 10:41:12.007376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.461 [2024-07-15 10:41:12.007389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.461 [2024-07-15 10:41:12.007418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.461 qpair failed and we were unable to recover it. 
00:24:23.719 [2024-07-15 10:41:12.017267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.719 [2024-07-15 10:41:12.017357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.719 [2024-07-15 10:41:12.017384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.719 [2024-07-15 10:41:12.017399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.719 [2024-07-15 10:41:12.017412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.719 [2024-07-15 10:41:12.017440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.719 qpair failed and we were unable to recover it. 00:24:23.719 [2024-07-15 10:41:12.027313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.719 [2024-07-15 10:41:12.027398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.719 [2024-07-15 10:41:12.027427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.719 [2024-07-15 10:41:12.027442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.719 [2024-07-15 10:41:12.027455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.719 [2024-07-15 10:41:12.027483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.719 qpair failed and we were unable to recover it. 00:24:23.719 [2024-07-15 10:41:12.037330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.719 [2024-07-15 10:41:12.037422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.719 [2024-07-15 10:41:12.037448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.719 [2024-07-15 10:41:12.037463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.719 [2024-07-15 10:41:12.037475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.719 [2024-07-15 10:41:12.037503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.719 qpair failed and we were unable to recover it. 
00:24:23.719 [2024-07-15 10:41:12.047401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.719 [2024-07-15 10:41:12.047521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.719 [2024-07-15 10:41:12.047545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.719 [2024-07-15 10:41:12.047571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.719 [2024-07-15 10:41:12.047584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.719 [2024-07-15 10:41:12.047612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.719 qpair failed and we were unable to recover it. 00:24:23.719 [2024-07-15 10:41:12.057390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.719 [2024-07-15 10:41:12.057479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.719 [2024-07-15 10:41:12.057504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.719 [2024-07-15 10:41:12.057518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.719 [2024-07-15 10:41:12.057532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.719 [2024-07-15 10:41:12.057560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.719 qpair failed and we were unable to recover it. 00:24:23.719 [2024-07-15 10:41:12.067412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.719 [2024-07-15 10:41:12.067513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.719 [2024-07-15 10:41:12.067538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.719 [2024-07-15 10:41:12.067553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.719 [2024-07-15 10:41:12.067566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.719 [2024-07-15 10:41:12.067599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.719 qpair failed and we were unable to recover it. 
00:24:23.720 [2024-07-15 10:41:12.077479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.077573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.077598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.720 [2024-07-15 10:41:12.077613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.720 [2024-07-15 10:41:12.077625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.720 [2024-07-15 10:41:12.077653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.720 qpair failed and we were unable to recover it. 00:24:23.720 [2024-07-15 10:41:12.087530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.087639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.087663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.720 [2024-07-15 10:41:12.087678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.720 [2024-07-15 10:41:12.087691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.720 [2024-07-15 10:41:12.087719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.720 qpair failed and we were unable to recover it. 00:24:23.720 [2024-07-15 10:41:12.097533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.097625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.097650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.720 [2024-07-15 10:41:12.097665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.720 [2024-07-15 10:41:12.097678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.720 [2024-07-15 10:41:12.097706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.720 qpair failed and we were unable to recover it. 
00:24:23.720 [2024-07-15 10:41:12.107546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.107639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.107667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.720 [2024-07-15 10:41:12.107684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.720 [2024-07-15 10:41:12.107696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.720 [2024-07-15 10:41:12.107726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.720 qpair failed and we were unable to recover it. 00:24:23.720 [2024-07-15 10:41:12.117580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.117673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.117702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.720 [2024-07-15 10:41:12.117718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.720 [2024-07-15 10:41:12.117731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.720 [2024-07-15 10:41:12.117760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.720 qpair failed and we were unable to recover it. 00:24:23.720 [2024-07-15 10:41:12.127603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.127692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.127717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.720 [2024-07-15 10:41:12.127732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.720 [2024-07-15 10:41:12.127745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.720 [2024-07-15 10:41:12.127774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.720 qpair failed and we were unable to recover it. 
00:24:23.720 [2024-07-15 10:41:12.137609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.137703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.137729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.720 [2024-07-15 10:41:12.137743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.720 [2024-07-15 10:41:12.137756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.720 [2024-07-15 10:41:12.137783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.720 qpair failed and we were unable to recover it. 00:24:23.720 [2024-07-15 10:41:12.147668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.147769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.147794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.720 [2024-07-15 10:41:12.147818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.720 [2024-07-15 10:41:12.147832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.720 [2024-07-15 10:41:12.147861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.720 qpair failed and we were unable to recover it. 00:24:23.720 [2024-07-15 10:41:12.157669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.157759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.157783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.720 [2024-07-15 10:41:12.157798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.720 [2024-07-15 10:41:12.157826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.720 [2024-07-15 10:41:12.157855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.720 qpair failed and we were unable to recover it. 
00:24:23.720 [2024-07-15 10:41:12.167710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.167796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.167828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.720 [2024-07-15 10:41:12.167844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.720 [2024-07-15 10:41:12.167857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.720 [2024-07-15 10:41:12.167884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.720 qpair failed and we were unable to recover it. 00:24:23.720 [2024-07-15 10:41:12.177729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.177822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.177848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.720 [2024-07-15 10:41:12.177863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.720 [2024-07-15 10:41:12.177876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.720 [2024-07-15 10:41:12.177903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.720 qpair failed and we were unable to recover it. 00:24:23.720 [2024-07-15 10:41:12.187765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.187860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.187884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.720 [2024-07-15 10:41:12.187899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.720 [2024-07-15 10:41:12.187912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.720 [2024-07-15 10:41:12.187939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.720 qpair failed and we were unable to recover it. 
00:24:23.720 [2024-07-15 10:41:12.197785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.197886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.197911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.720 [2024-07-15 10:41:12.197926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.720 [2024-07-15 10:41:12.197939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.720 [2024-07-15 10:41:12.197967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.720 qpair failed and we were unable to recover it. 00:24:23.720 [2024-07-15 10:41:12.207830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.207971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.207998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.720 [2024-07-15 10:41:12.208015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.720 [2024-07-15 10:41:12.208028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.720 [2024-07-15 10:41:12.208057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.720 qpair failed and we were unable to recover it. 00:24:23.720 [2024-07-15 10:41:12.217899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.720 [2024-07-15 10:41:12.218003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.720 [2024-07-15 10:41:12.218027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.721 [2024-07-15 10:41:12.218041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.721 [2024-07-15 10:41:12.218054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.721 [2024-07-15 10:41:12.218082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.721 qpair failed and we were unable to recover it. 
00:24:23.721 [2024-07-15 10:41:12.227963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.721 [2024-07-15 10:41:12.228051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.721 [2024-07-15 10:41:12.228075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.721 [2024-07-15 10:41:12.228090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.721 [2024-07-15 10:41:12.228102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.721 [2024-07-15 10:41:12.228130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.721 qpair failed and we were unable to recover it. 00:24:23.721 [2024-07-15 10:41:12.237909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.721 [2024-07-15 10:41:12.237997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.721 [2024-07-15 10:41:12.238021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.721 [2024-07-15 10:41:12.238036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.721 [2024-07-15 10:41:12.238050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.721 [2024-07-15 10:41:12.238077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.721 qpair failed and we were unable to recover it. 00:24:23.721 [2024-07-15 10:41:12.247978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.721 [2024-07-15 10:41:12.248091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.721 [2024-07-15 10:41:12.248118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.721 [2024-07-15 10:41:12.248133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.721 [2024-07-15 10:41:12.248152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.721 [2024-07-15 10:41:12.248180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.721 qpair failed and we were unable to recover it. 
00:24:23.721 [2024-07-15 10:41:12.257976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.721 [2024-07-15 10:41:12.258067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.721 [2024-07-15 10:41:12.258091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.721 [2024-07-15 10:41:12.258105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.721 [2024-07-15 10:41:12.258118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.721 [2024-07-15 10:41:12.258146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.721 qpair failed and we were unable to recover it. 00:24:23.721 [2024-07-15 10:41:12.268006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.721 [2024-07-15 10:41:12.268103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.721 [2024-07-15 10:41:12.268131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.721 [2024-07-15 10:41:12.268147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.721 [2024-07-15 10:41:12.268160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.721 [2024-07-15 10:41:12.268190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.721 qpair failed and we were unable to recover it. 00:24:23.979 [2024-07-15 10:41:12.278028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.979 [2024-07-15 10:41:12.278124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.979 [2024-07-15 10:41:12.278151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.979 [2024-07-15 10:41:12.278167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.979 [2024-07-15 10:41:12.278180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.979 [2024-07-15 10:41:12.278209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.979 qpair failed and we were unable to recover it. 
00:24:23.979 [2024-07-15 10:41:12.288148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.979 [2024-07-15 10:41:12.288236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.979 [2024-07-15 10:41:12.288261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.979 [2024-07-15 10:41:12.288277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.979 [2024-07-15 10:41:12.288290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.979 [2024-07-15 10:41:12.288318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.979 qpair failed and we were unable to recover it. 00:24:23.979 [2024-07-15 10:41:12.298065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.979 [2024-07-15 10:41:12.298157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.979 [2024-07-15 10:41:12.298182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.979 [2024-07-15 10:41:12.298197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.979 [2024-07-15 10:41:12.298211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.979 [2024-07-15 10:41:12.298239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.979 qpair failed and we were unable to recover it. 00:24:23.979 [2024-07-15 10:41:12.308108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.979 [2024-07-15 10:41:12.308199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.979 [2024-07-15 10:41:12.308225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.979 [2024-07-15 10:41:12.308240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.979 [2024-07-15 10:41:12.308253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.979 [2024-07-15 10:41:12.308281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.979 qpair failed and we were unable to recover it. 
00:24:23.979 [2024-07-15 10:41:12.318160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.979 [2024-07-15 10:41:12.318284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.979 [2024-07-15 10:41:12.318311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.979 [2024-07-15 10:41:12.318328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.979 [2024-07-15 10:41:12.318341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.979 [2024-07-15 10:41:12.318369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.979 qpair failed and we were unable to recover it. 00:24:23.979 [2024-07-15 10:41:12.328261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.979 [2024-07-15 10:41:12.328369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.979 [2024-07-15 10:41:12.328396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.979 [2024-07-15 10:41:12.328412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.979 [2024-07-15 10:41:12.328425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.979 [2024-07-15 10:41:12.328453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.979 qpair failed and we were unable to recover it. 00:24:23.979 [2024-07-15 10:41:12.338204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.979 [2024-07-15 10:41:12.338292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.979 [2024-07-15 10:41:12.338319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.979 [2024-07-15 10:41:12.338341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.979 [2024-07-15 10:41:12.338355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.979 [2024-07-15 10:41:12.338383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.979 qpair failed and we were unable to recover it. 
00:24:23.979 [2024-07-15 10:41:12.348229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.348349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.348375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.980 [2024-07-15 10:41:12.348391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.980 [2024-07-15 10:41:12.348403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.980 [2024-07-15 10:41:12.348432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.980 qpair failed and we were unable to recover it. 00:24:23.980 [2024-07-15 10:41:12.358238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.358326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.358351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.980 [2024-07-15 10:41:12.358366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.980 [2024-07-15 10:41:12.358379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.980 [2024-07-15 10:41:12.358406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.980 qpair failed and we were unable to recover it. 00:24:23.980 [2024-07-15 10:41:12.368302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.368391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.368416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.980 [2024-07-15 10:41:12.368432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.980 [2024-07-15 10:41:12.368445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.980 [2024-07-15 10:41:12.368473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.980 qpair failed and we were unable to recover it. 
00:24:23.980 [2024-07-15 10:41:12.378289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.378391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.378417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.980 [2024-07-15 10:41:12.378433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.980 [2024-07-15 10:41:12.378446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.980 [2024-07-15 10:41:12.378474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.980 qpair failed and we were unable to recover it. 00:24:23.980 [2024-07-15 10:41:12.388344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.388428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.388453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.980 [2024-07-15 10:41:12.388468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.980 [2024-07-15 10:41:12.388481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.980 [2024-07-15 10:41:12.388508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.980 qpair failed and we were unable to recover it. 00:24:23.980 [2024-07-15 10:41:12.398356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.398450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.398475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.980 [2024-07-15 10:41:12.398490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.980 [2024-07-15 10:41:12.398503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.980 [2024-07-15 10:41:12.398531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.980 qpair failed and we were unable to recover it. 
00:24:23.980 [2024-07-15 10:41:12.408364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.408459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.408483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.980 [2024-07-15 10:41:12.408498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.980 [2024-07-15 10:41:12.408511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.980 [2024-07-15 10:41:12.408539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.980 qpair failed and we were unable to recover it. 00:24:23.980 [2024-07-15 10:41:12.418421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.418515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.418540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.980 [2024-07-15 10:41:12.418555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.980 [2024-07-15 10:41:12.418568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.980 [2024-07-15 10:41:12.418595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.980 qpair failed and we were unable to recover it. 00:24:23.980 [2024-07-15 10:41:12.428477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.428583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.428608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.980 [2024-07-15 10:41:12.428628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.980 [2024-07-15 10:41:12.428642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.980 [2024-07-15 10:41:12.428670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.980 qpair failed and we were unable to recover it. 
00:24:23.980 [2024-07-15 10:41:12.438464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.438552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.438576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.980 [2024-07-15 10:41:12.438591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.980 [2024-07-15 10:41:12.438604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.980 [2024-07-15 10:41:12.438631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.980 qpair failed and we were unable to recover it. 00:24:23.980 [2024-07-15 10:41:12.448494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.448582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.448607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.980 [2024-07-15 10:41:12.448622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.980 [2024-07-15 10:41:12.448635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.980 [2024-07-15 10:41:12.448663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.980 qpair failed and we were unable to recover it. 00:24:23.980 [2024-07-15 10:41:12.458507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.458639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.458663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.980 [2024-07-15 10:41:12.458677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.980 [2024-07-15 10:41:12.458691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.980 [2024-07-15 10:41:12.458718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.980 qpair failed and we were unable to recover it. 
00:24:23.980 [2024-07-15 10:41:12.468590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.468678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.468703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.980 [2024-07-15 10:41:12.468718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.980 [2024-07-15 10:41:12.468731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.980 [2024-07-15 10:41:12.468759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.980 qpair failed and we were unable to recover it. 00:24:23.980 [2024-07-15 10:41:12.478590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.478681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.478706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.980 [2024-07-15 10:41:12.478720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.980 [2024-07-15 10:41:12.478733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.980 [2024-07-15 10:41:12.478761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.980 qpair failed and we were unable to recover it. 00:24:23.980 [2024-07-15 10:41:12.488586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.980 [2024-07-15 10:41:12.488681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.980 [2024-07-15 10:41:12.488706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.981 [2024-07-15 10:41:12.488721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.981 [2024-07-15 10:41:12.488733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.981 [2024-07-15 10:41:12.488761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.981 qpair failed and we were unable to recover it. 
00:24:23.981 [2024-07-15 10:41:12.498642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.981 [2024-07-15 10:41:12.498769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.981 [2024-07-15 10:41:12.498794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.981 [2024-07-15 10:41:12.498816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.981 [2024-07-15 10:41:12.498830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.981 [2024-07-15 10:41:12.498858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.981 qpair failed and we were unable to recover it. 00:24:23.981 [2024-07-15 10:41:12.508646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.981 [2024-07-15 10:41:12.508754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.981 [2024-07-15 10:41:12.508779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.981 [2024-07-15 10:41:12.508794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.981 [2024-07-15 10:41:12.508813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.981 [2024-07-15 10:41:12.508842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.981 qpair failed and we were unable to recover it. 00:24:23.981 [2024-07-15 10:41:12.518710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.981 [2024-07-15 10:41:12.518828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.981 [2024-07-15 10:41:12.518853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.981 [2024-07-15 10:41:12.518875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.981 [2024-07-15 10:41:12.518889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:23.981 [2024-07-15 10:41:12.518917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.981 qpair failed and we were unable to recover it. 
00:24:24.239 [2024-07-15 10:41:12.528711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.239 [2024-07-15 10:41:12.528810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.239 [2024-07-15 10:41:12.528838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.239 [2024-07-15 10:41:12.528854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.239 [2024-07-15 10:41:12.528878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.239 [2024-07-15 10:41:12.528913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-07-15 10:41:12.538782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.239 [2024-07-15 10:41:12.538910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.239 [2024-07-15 10:41:12.538947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.239 [2024-07-15 10:41:12.538962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.239 [2024-07-15 10:41:12.538975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.239 [2024-07-15 10:41:12.539003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-07-15 10:41:12.548771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.239 [2024-07-15 10:41:12.548875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.239 [2024-07-15 10:41:12.548900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.239 [2024-07-15 10:41:12.548916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.239 [2024-07-15 10:41:12.548928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.239 [2024-07-15 10:41:12.548957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.239 qpair failed and we were unable to recover it. 
00:24:24.239 [2024-07-15 10:41:12.558829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.239 [2024-07-15 10:41:12.558927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.239 [2024-07-15 10:41:12.558955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.239 [2024-07-15 10:41:12.558972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.239 [2024-07-15 10:41:12.558985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.239 [2024-07-15 10:41:12.559014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-07-15 10:41:12.568823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.239 [2024-07-15 10:41:12.568914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.239 [2024-07-15 10:41:12.568939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.239 [2024-07-15 10:41:12.568954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.239 [2024-07-15 10:41:12.568966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.239 [2024-07-15 10:41:12.568995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-07-15 10:41:12.578907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.239 [2024-07-15 10:41:12.579011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.239 [2024-07-15 10:41:12.579036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.239 [2024-07-15 10:41:12.579050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.239 [2024-07-15 10:41:12.579063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.239 [2024-07-15 10:41:12.579091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.239 qpair failed and we were unable to recover it. 
00:24:24.239 [2024-07-15 10:41:12.588901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.239 [2024-07-15 10:41:12.588996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.239 [2024-07-15 10:41:12.589021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.239 [2024-07-15 10:41:12.589036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.239 [2024-07-15 10:41:12.589049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.239 [2024-07-15 10:41:12.589077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-07-15 10:41:12.598922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.239 [2024-07-15 10:41:12.599032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.239 [2024-07-15 10:41:12.599057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.239 [2024-07-15 10:41:12.599071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.239 [2024-07-15 10:41:12.599084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.239 [2024-07-15 10:41:12.599112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-07-15 10:41:12.608938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.240 [2024-07-15 10:41:12.609028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.240 [2024-07-15 10:41:12.609058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.240 [2024-07-15 10:41:12.609074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.240 [2024-07-15 10:41:12.609087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.240 [2024-07-15 10:41:12.609115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 
00:24:24.240 [2024-07-15 10:41:12.618971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.240 [2024-07-15 10:41:12.619057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.240 [2024-07-15 10:41:12.619081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.240 [2024-07-15 10:41:12.619097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.240 [2024-07-15 10:41:12.619109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.240 [2024-07-15 10:41:12.619137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-07-15 10:41:12.628986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.240 [2024-07-15 10:41:12.629072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.240 [2024-07-15 10:41:12.629097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.240 [2024-07-15 10:41:12.629112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.240 [2024-07-15 10:41:12.629125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.240 [2024-07-15 10:41:12.629152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-07-15 10:41:12.639050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.240 [2024-07-15 10:41:12.639180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.240 [2024-07-15 10:41:12.639205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.240 [2024-07-15 10:41:12.639220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.240 [2024-07-15 10:41:12.639232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.240 [2024-07-15 10:41:12.639260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 
00:24:24.240 [2024-07-15 10:41:12.649060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.240 [2024-07-15 10:41:12.649149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.240 [2024-07-15 10:41:12.649175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.240 [2024-07-15 10:41:12.649189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.240 [2024-07-15 10:41:12.649202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.240 [2024-07-15 10:41:12.649229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-07-15 10:41:12.659102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.240 [2024-07-15 10:41:12.659186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.240 [2024-07-15 10:41:12.659211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.240 [2024-07-15 10:41:12.659225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.240 [2024-07-15 10:41:12.659238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.240 [2024-07-15 10:41:12.659266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-07-15 10:41:12.669101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.240 [2024-07-15 10:41:12.669187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.240 [2024-07-15 10:41:12.669211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.240 [2024-07-15 10:41:12.669226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.240 [2024-07-15 10:41:12.669239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.240 [2024-07-15 10:41:12.669266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 
00:24:24.240 [2024-07-15 10:41:12.679151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.240 [2024-07-15 10:41:12.679248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.240 [2024-07-15 10:41:12.679273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.240 [2024-07-15 10:41:12.679288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.240 [2024-07-15 10:41:12.679300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.240 [2024-07-15 10:41:12.679328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-07-15 10:41:12.689159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.240 [2024-07-15 10:41:12.689242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.240 [2024-07-15 10:41:12.689267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.240 [2024-07-15 10:41:12.689282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.240 [2024-07-15 10:41:12.689295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.240 [2024-07-15 10:41:12.689323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-07-15 10:41:12.699215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.240 [2024-07-15 10:41:12.699300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.240 [2024-07-15 10:41:12.699330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.240 [2024-07-15 10:41:12.699346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.240 [2024-07-15 10:41:12.699358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.240 [2024-07-15 10:41:12.699387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 
00:24:24.240 [2024-07-15 10:41:12.709220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.240 [2024-07-15 10:41:12.709352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.240 [2024-07-15 10:41:12.709377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.240 [2024-07-15 10:41:12.709391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.240 [2024-07-15 10:41:12.709404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.240 [2024-07-15 10:41:12.709431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-07-15 10:41:12.719251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.240 [2024-07-15 10:41:12.719386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.240 [2024-07-15 10:41:12.719410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.240 [2024-07-15 10:41:12.719425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.240 [2024-07-15 10:41:12.719438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.240 [2024-07-15 10:41:12.719465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-07-15 10:41:12.729302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.240 [2024-07-15 10:41:12.729396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.240 [2024-07-15 10:41:12.729421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.240 [2024-07-15 10:41:12.729436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.240 [2024-07-15 10:41:12.729448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.240 [2024-07-15 10:41:12.729476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 
00:24:24.240 [2024-07-15 10:41:12.739295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.240 [2024-07-15 10:41:12.739383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.240 [2024-07-15 10:41:12.739407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.240 [2024-07-15 10:41:12.739422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.240 [2024-07-15 10:41:12.739435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.240 [2024-07-15 10:41:12.739467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-07-15 10:41:12.749379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.241 [2024-07-15 10:41:12.749466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.241 [2024-07-15 10:41:12.749491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.241 [2024-07-15 10:41:12.749506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.241 [2024-07-15 10:41:12.749519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.241 [2024-07-15 10:41:12.749546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.241 qpair failed and we were unable to recover it. 00:24:24.241 [2024-07-15 10:41:12.759449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.241 [2024-07-15 10:41:12.759544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.241 [2024-07-15 10:41:12.759568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.241 [2024-07-15 10:41:12.759583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.241 [2024-07-15 10:41:12.759596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.241 [2024-07-15 10:41:12.759624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.241 qpair failed and we were unable to recover it. 
00:24:24.241 [2024-07-15 10:41:12.769378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.241 [2024-07-15 10:41:12.769471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.241 [2024-07-15 10:41:12.769495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.241 [2024-07-15 10:41:12.769510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.241 [2024-07-15 10:41:12.769522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.241 [2024-07-15 10:41:12.769550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.241 qpair failed and we were unable to recover it. 00:24:24.241 [2024-07-15 10:41:12.779509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.241 [2024-07-15 10:41:12.779628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.241 [2024-07-15 10:41:12.779652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.241 [2024-07-15 10:41:12.779666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.241 [2024-07-15 10:41:12.779679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.241 [2024-07-15 10:41:12.779707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.241 qpair failed and we were unable to recover it. 00:24:24.499 [2024-07-15 10:41:12.789455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.499 [2024-07-15 10:41:12.789583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.499 [2024-07-15 10:41:12.789615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.499 [2024-07-15 10:41:12.789631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.499 [2024-07-15 10:41:12.789644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.499 [2024-07-15 10:41:12.789672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.499 qpair failed and we were unable to recover it. 
00:24:24.499 [2024-07-15 10:41:12.799617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.499 [2024-07-15 10:41:12.799746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.499 [2024-07-15 10:41:12.799773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.499 [2024-07-15 10:41:12.799788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.499 [2024-07-15 10:41:12.799808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.499 [2024-07-15 10:41:12.799840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.499 qpair failed and we were unable to recover it. 00:24:24.499 [2024-07-15 10:41:12.809497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.499 [2024-07-15 10:41:12.809586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.499 [2024-07-15 10:41:12.809611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.499 [2024-07-15 10:41:12.809626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.499 [2024-07-15 10:41:12.809639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.499 [2024-07-15 10:41:12.809667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.499 qpair failed and we were unable to recover it. 00:24:24.499 [2024-07-15 10:41:12.819549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.499 [2024-07-15 10:41:12.819653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.499 [2024-07-15 10:41:12.819677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.499 [2024-07-15 10:41:12.819692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.499 [2024-07-15 10:41:12.819705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.499 [2024-07-15 10:41:12.819733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.499 qpair failed and we were unable to recover it. 
00:24:24.499 [2024-07-15 10:41:12.829560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.499 [2024-07-15 10:41:12.829695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.499 [2024-07-15 10:41:12.829719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.499 [2024-07-15 10:41:12.829735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.499 [2024-07-15 10:41:12.829747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.499 [2024-07-15 10:41:12.829794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.499 qpair failed and we were unable to recover it. 00:24:24.499 [2024-07-15 10:41:12.839628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.499 [2024-07-15 10:41:12.839751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.499 [2024-07-15 10:41:12.839775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.499 [2024-07-15 10:41:12.839790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.499 [2024-07-15 10:41:12.839810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.499 [2024-07-15 10:41:12.839840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.499 qpair failed and we were unable to recover it. 00:24:24.499 [2024-07-15 10:41:12.849641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.499 [2024-07-15 10:41:12.849739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.499 [2024-07-15 10:41:12.849764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.499 [2024-07-15 10:41:12.849779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.499 [2024-07-15 10:41:12.849791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.499 [2024-07-15 10:41:12.849827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.499 qpair failed and we were unable to recover it. 
00:24:24.499 [2024-07-15 10:41:12.859639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.499 [2024-07-15 10:41:12.859724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.499 [2024-07-15 10:41:12.859748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.499 [2024-07-15 10:41:12.859762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.499 [2024-07-15 10:41:12.859775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.499 [2024-07-15 10:41:12.859811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.499 qpair failed and we were unable to recover it. 00:24:24.499 [2024-07-15 10:41:12.869693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.499 [2024-07-15 10:41:12.869821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.499 [2024-07-15 10:41:12.869848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.499 [2024-07-15 10:41:12.869863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.499 [2024-07-15 10:41:12.869876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.499 [2024-07-15 10:41:12.869904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.499 qpair failed and we were unable to recover it. 00:24:24.499 [2024-07-15 10:41:12.879697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.499 [2024-07-15 10:41:12.879790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.499 [2024-07-15 10:41:12.879826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.499 [2024-07-15 10:41:12.879843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.499 [2024-07-15 10:41:12.879856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.499 [2024-07-15 10:41:12.879884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.499 qpair failed and we were unable to recover it. 
00:24:24.499 [2024-07-15 10:41:12.889721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.499 [2024-07-15 10:41:12.889816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.499 [2024-07-15 10:41:12.889840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.499 [2024-07-15 10:41:12.889855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.499 [2024-07-15 10:41:12.889868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.499 [2024-07-15 10:41:12.889896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.499 qpair failed and we were unable to recover it. 00:24:24.499 [2024-07-15 10:41:12.899799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:12.899920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:12.899946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.500 [2024-07-15 10:41:12.899962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.500 [2024-07-15 10:41:12.899974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.500 [2024-07-15 10:41:12.900003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.500 qpair failed and we were unable to recover it. 00:24:24.500 [2024-07-15 10:41:12.909790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:12.909877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:12.909901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.500 [2024-07-15 10:41:12.909915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.500 [2024-07-15 10:41:12.909928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.500 [2024-07-15 10:41:12.909956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.500 qpair failed and we were unable to recover it. 
00:24:24.500 [2024-07-15 10:41:12.919837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:12.919926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:12.919950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.500 [2024-07-15 10:41:12.919964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.500 [2024-07-15 10:41:12.919982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.500 [2024-07-15 10:41:12.920010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.500 qpair failed and we were unable to recover it. 00:24:24.500 [2024-07-15 10:41:12.929856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:12.929948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:12.929972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.500 [2024-07-15 10:41:12.929987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.500 [2024-07-15 10:41:12.929999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.500 [2024-07-15 10:41:12.930027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.500 qpair failed and we were unable to recover it. 00:24:24.500 [2024-07-15 10:41:12.939969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:12.940046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:12.940070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.500 [2024-07-15 10:41:12.940084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.500 [2024-07-15 10:41:12.940097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.500 [2024-07-15 10:41:12.940124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.500 qpair failed and we were unable to recover it. 
00:24:24.500 [2024-07-15 10:41:12.949911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:12.949994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:12.950018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.500 [2024-07-15 10:41:12.950033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.500 [2024-07-15 10:41:12.950046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.500 [2024-07-15 10:41:12.950073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.500 qpair failed and we were unable to recover it. 00:24:24.500 [2024-07-15 10:41:12.960059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:12.960151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:12.960179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.500 [2024-07-15 10:41:12.960196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.500 [2024-07-15 10:41:12.960209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.500 [2024-07-15 10:41:12.960238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.500 qpair failed and we were unable to recover it. 00:24:24.500 [2024-07-15 10:41:12.969978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:12.970080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:12.970104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.500 [2024-07-15 10:41:12.970120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.500 [2024-07-15 10:41:12.970132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.500 [2024-07-15 10:41:12.970160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.500 qpair failed and we were unable to recover it. 
00:24:24.500 [2024-07-15 10:41:12.980020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:12.980146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:12.980172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.500 [2024-07-15 10:41:12.980187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.500 [2024-07-15 10:41:12.980200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.500 [2024-07-15 10:41:12.980227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.500 qpair failed and we were unable to recover it. 00:24:24.500 [2024-07-15 10:41:12.990049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:12.990131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:12.990155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.500 [2024-07-15 10:41:12.990169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.500 [2024-07-15 10:41:12.990182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.500 [2024-07-15 10:41:12.990210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.500 qpair failed and we were unable to recover it. 00:24:24.500 [2024-07-15 10:41:13.000049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:13.000137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:13.000161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.500 [2024-07-15 10:41:13.000175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.500 [2024-07-15 10:41:13.000188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.500 [2024-07-15 10:41:13.000216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.500 qpair failed and we were unable to recover it. 
00:24:24.500 [2024-07-15 10:41:13.010066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:13.010153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:13.010177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.500 [2024-07-15 10:41:13.010192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.500 [2024-07-15 10:41:13.010210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.500 [2024-07-15 10:41:13.010239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.500 qpair failed and we were unable to recover it. 00:24:24.500 [2024-07-15 10:41:13.020116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:13.020200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:13.020224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.500 [2024-07-15 10:41:13.020239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.500 [2024-07-15 10:41:13.020252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.500 [2024-07-15 10:41:13.020279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.500 qpair failed and we were unable to recover it. 00:24:24.500 [2024-07-15 10:41:13.030135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:13.030223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:13.030247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.500 [2024-07-15 10:41:13.030262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.500 [2024-07-15 10:41:13.030275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.500 [2024-07-15 10:41:13.030302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.500 qpair failed and we were unable to recover it. 
00:24:24.500 [2024-07-15 10:41:13.040194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.500 [2024-07-15 10:41:13.040284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.500 [2024-07-15 10:41:13.040308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.501 [2024-07-15 10:41:13.040323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.501 [2024-07-15 10:41:13.040336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.501 [2024-07-15 10:41:13.040364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.501 qpair failed and we were unable to recover it. 00:24:24.758 [2024-07-15 10:41:13.050247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.758 [2024-07-15 10:41:13.050345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.758 [2024-07-15 10:41:13.050376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.758 [2024-07-15 10:41:13.050393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.758 [2024-07-15 10:41:13.050406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.758 [2024-07-15 10:41:13.050435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.758 qpair failed and we were unable to recover it. 00:24:24.758 [2024-07-15 10:41:13.060227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.758 [2024-07-15 10:41:13.060368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.758 [2024-07-15 10:41:13.060394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.758 [2024-07-15 10:41:13.060410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.758 [2024-07-15 10:41:13.060422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.758 [2024-07-15 10:41:13.060451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.758 qpair failed and we were unable to recover it. 
00:24:24.758 [2024-07-15 10:41:13.070292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.758 [2024-07-15 10:41:13.070376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.758 [2024-07-15 10:41:13.070401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.758 [2024-07-15 10:41:13.070416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.758 [2024-07-15 10:41:13.070429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.758 [2024-07-15 10:41:13.070457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.758 qpair failed and we were unable to recover it. 00:24:24.758 [2024-07-15 10:41:13.080310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.758 [2024-07-15 10:41:13.080399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.758 [2024-07-15 10:41:13.080423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.758 [2024-07-15 10:41:13.080439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.758 [2024-07-15 10:41:13.080452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.758 [2024-07-15 10:41:13.080480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.758 qpair failed and we were unable to recover it. 00:24:24.758 [2024-07-15 10:41:13.090300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.758 [2024-07-15 10:41:13.090389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.758 [2024-07-15 10:41:13.090414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.758 [2024-07-15 10:41:13.090429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.758 [2024-07-15 10:41:13.090442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.758 [2024-07-15 10:41:13.090470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.758 qpair failed and we were unable to recover it. 
00:24:24.758 [2024-07-15 10:41:13.100342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.758 [2024-07-15 10:41:13.100424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.758 [2024-07-15 10:41:13.100449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.758 [2024-07-15 10:41:13.100464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.758 [2024-07-15 10:41:13.100482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.758 [2024-07-15 10:41:13.100510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.758 qpair failed and we were unable to recover it. 00:24:24.758 [2024-07-15 10:41:13.110397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.758 [2024-07-15 10:41:13.110483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.758 [2024-07-15 10:41:13.110508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.758 [2024-07-15 10:41:13.110523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.758 [2024-07-15 10:41:13.110536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.758 [2024-07-15 10:41:13.110564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.758 qpair failed and we were unable to recover it. 00:24:24.758 [2024-07-15 10:41:13.120424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.758 [2024-07-15 10:41:13.120513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.758 [2024-07-15 10:41:13.120537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.758 [2024-07-15 10:41:13.120552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.758 [2024-07-15 10:41:13.120564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.758 [2024-07-15 10:41:13.120592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.758 qpair failed and we were unable to recover it. 
00:24:24.758 [2024-07-15 10:41:13.130449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.758 [2024-07-15 10:41:13.130569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.758 [2024-07-15 10:41:13.130598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.758 [2024-07-15 10:41:13.130614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.758 [2024-07-15 10:41:13.130627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.758 [2024-07-15 10:41:13.130655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.758 qpair failed and we were unable to recover it. 00:24:24.758 [2024-07-15 10:41:13.140483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.758 [2024-07-15 10:41:13.140601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.758 [2024-07-15 10:41:13.140628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.758 [2024-07-15 10:41:13.140643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.758 [2024-07-15 10:41:13.140655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.758 [2024-07-15 10:41:13.140683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.758 qpair failed and we were unable to recover it. 00:24:24.758 [2024-07-15 10:41:13.150488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.758 [2024-07-15 10:41:13.150573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.758 [2024-07-15 10:41:13.150598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.758 [2024-07-15 10:41:13.150612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.150625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.150652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 
00:24:24.759 [2024-07-15 10:41:13.160547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.160673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.160698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.160713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.160726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.160754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 00:24:24.759 [2024-07-15 10:41:13.170570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.170677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.170704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.170719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.170732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.170759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 00:24:24.759 [2024-07-15 10:41:13.180656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.180739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.180763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.180777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.180789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.180824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 
00:24:24.759 [2024-07-15 10:41:13.190597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.190677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.190702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.190721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.190735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.190763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 00:24:24.759 [2024-07-15 10:41:13.200623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.200715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.200739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.200754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.200767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.200794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 00:24:24.759 [2024-07-15 10:41:13.210655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.210741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.210765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.210780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.210793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.210827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 
00:24:24.759 [2024-07-15 10:41:13.220826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.220951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.220977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.220992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.221005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.221032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 00:24:24.759 [2024-07-15 10:41:13.230725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.230818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.230844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.230861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.230874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.230902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 00:24:24.759 [2024-07-15 10:41:13.240738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.240832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.240857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.240871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.240884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.240912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 
00:24:24.759 [2024-07-15 10:41:13.250752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.250833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.250857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.250872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.250885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.250912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 00:24:24.759 [2024-07-15 10:41:13.260798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.260893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.260917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.260932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.260945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.260972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 00:24:24.759 [2024-07-15 10:41:13.270853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.270963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.270990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.271005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.271017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.271045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 
00:24:24.759 [2024-07-15 10:41:13.280969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.281056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.281080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.281102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.281115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.281143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 00:24:24.759 [2024-07-15 10:41:13.290932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.291018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.291042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.291056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.291069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.291097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 00:24:24.759 [2024-07-15 10:41:13.300909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.759 [2024-07-15 10:41:13.301008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.759 [2024-07-15 10:41:13.301034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.759 [2024-07-15 10:41:13.301049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.759 [2024-07-15 10:41:13.301062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:24.759 [2024-07-15 10:41:13.301089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.759 qpair failed and we were unable to recover it. 
00:24:25.016 [2024-07-15 10:41:13.310965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.016 [2024-07-15 10:41:13.311091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.016 [2024-07-15 10:41:13.311119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.016 [2024-07-15 10:41:13.311148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.016 [2024-07-15 10:41:13.311173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.016 [2024-07-15 10:41:13.311216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.016 qpair failed and we were unable to recover it. 00:24:25.016 [2024-07-15 10:41:13.320986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.016 [2024-07-15 10:41:13.321077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.016 [2024-07-15 10:41:13.321102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.016 [2024-07-15 10:41:13.321117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.016 [2024-07-15 10:41:13.321130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.016 [2024-07-15 10:41:13.321158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.016 qpair failed and we were unable to recover it. 00:24:25.016 [2024-07-15 10:41:13.331060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.016 [2024-07-15 10:41:13.331179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.016 [2024-07-15 10:41:13.331206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.016 [2024-07-15 10:41:13.331222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.016 [2024-07-15 10:41:13.331234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.017 [2024-07-15 10:41:13.331262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.017 qpair failed and we were unable to recover it. 
00:24:25.017 [2024-07-15 10:41:13.341025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.017 [2024-07-15 10:41:13.341112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.017 [2024-07-15 10:41:13.341137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.017 [2024-07-15 10:41:13.341152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.017 [2024-07-15 10:41:13.341164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.017 [2024-07-15 10:41:13.341192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.017 qpair failed and we were unable to recover it. 00:24:25.017 [2024-07-15 10:41:13.351058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.017 [2024-07-15 10:41:13.351194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.017 [2024-07-15 10:41:13.351220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.017 [2024-07-15 10:41:13.351236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.017 [2024-07-15 10:41:13.351249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.017 [2024-07-15 10:41:13.351276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.017 qpair failed and we were unable to recover it. 00:24:25.017 [2024-07-15 10:41:13.361138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.017 [2024-07-15 10:41:13.361255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.017 [2024-07-15 10:41:13.361281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.017 [2024-07-15 10:41:13.361296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.017 [2024-07-15 10:41:13.361309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.017 [2024-07-15 10:41:13.361336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.017 qpair failed and we were unable to recover it. 
00:24:25.017 [2024-07-15 10:41:13.371116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.017 [2024-07-15 10:41:13.371207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.017 [2024-07-15 10:41:13.371237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.017 [2024-07-15 10:41:13.371253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.017 [2024-07-15 10:41:13.371266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.017 [2024-07-15 10:41:13.371293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.017 qpair failed and we were unable to recover it. 00:24:25.017 [2024-07-15 10:41:13.381161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.017 [2024-07-15 10:41:13.381272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.017 [2024-07-15 10:41:13.381298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.017 [2024-07-15 10:41:13.381312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.017 [2024-07-15 10:41:13.381325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.017 [2024-07-15 10:41:13.381353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.017 qpair failed and we were unable to recover it. 00:24:25.017 [2024-07-15 10:41:13.391295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.017 [2024-07-15 10:41:13.391422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.017 [2024-07-15 10:41:13.391448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.017 [2024-07-15 10:41:13.391464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.017 [2024-07-15 10:41:13.391477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.017 [2024-07-15 10:41:13.391505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.017 qpair failed and we were unable to recover it. 
00:24:25.017 [2024-07-15 10:41:13.401240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.017 [2024-07-15 10:41:13.401335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.017 [2024-07-15 10:41:13.401359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.017 [2024-07-15 10:41:13.401374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.017 [2024-07-15 10:41:13.401387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.017 [2024-07-15 10:41:13.401415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.017 qpair failed and we were unable to recover it. 00:24:25.017 [2024-07-15 10:41:13.411265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.017 [2024-07-15 10:41:13.411367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.017 [2024-07-15 10:41:13.411392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.017 [2024-07-15 10:41:13.411408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.017 [2024-07-15 10:41:13.411421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.017 [2024-07-15 10:41:13.411449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.017 qpair failed and we were unable to recover it. 00:24:25.017 [2024-07-15 10:41:13.421276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.017 [2024-07-15 10:41:13.421366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.017 [2024-07-15 10:41:13.421390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.017 [2024-07-15 10:41:13.421405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.017 [2024-07-15 10:41:13.421417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.017 [2024-07-15 10:41:13.421445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.017 qpair failed and we were unable to recover it. 
00:24:25.017 [2024-07-15 10:41:13.431299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.017 [2024-07-15 10:41:13.431391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.017 [2024-07-15 10:41:13.431415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.017 [2024-07-15 10:41:13.431429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.017 [2024-07-15 10:41:13.431442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.017 [2024-07-15 10:41:13.431470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.017 qpair failed and we were unable to recover it. 00:24:25.017 [2024-07-15 10:41:13.441310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.017 [2024-07-15 10:41:13.441399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.017 [2024-07-15 10:41:13.441422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.017 [2024-07-15 10:41:13.441437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.017 [2024-07-15 10:41:13.441450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.017 [2024-07-15 10:41:13.441479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.017 qpair failed and we were unable to recover it. 00:24:25.017 [2024-07-15 10:41:13.451352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.017 [2024-07-15 10:41:13.451441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.017 [2024-07-15 10:41:13.451465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.017 [2024-07-15 10:41:13.451480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.017 [2024-07-15 10:41:13.451492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.017 [2024-07-15 10:41:13.451519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.017 qpair failed and we were unable to recover it. 
00:24:25.017 [2024-07-15 10:41:13.461381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.017 [2024-07-15 10:41:13.461510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.017 [2024-07-15 10:41:13.461540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.017 [2024-07-15 10:41:13.461556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.017 [2024-07-15 10:41:13.461569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.017 [2024-07-15 10:41:13.461596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.017 qpair failed and we were unable to recover it. 00:24:25.017 [2024-07-15 10:41:13.471420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.017 [2024-07-15 10:41:13.471514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.017 [2024-07-15 10:41:13.471540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.017 [2024-07-15 10:41:13.471556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.017 [2024-07-15 10:41:13.471569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.018 [2024-07-15 10:41:13.471597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.018 qpair failed and we were unable to recover it. 00:24:25.018 [2024-07-15 10:41:13.481470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.018 [2024-07-15 10:41:13.481587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.018 [2024-07-15 10:41:13.481613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.018 [2024-07-15 10:41:13.481628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.018 [2024-07-15 10:41:13.481641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.018 [2024-07-15 10:41:13.481668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.018 qpair failed and we were unable to recover it. 
00:24:25.018 [2024-07-15 10:41:13.491471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.018 [2024-07-15 10:41:13.491567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.018 [2024-07-15 10:41:13.491591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.018 [2024-07-15 10:41:13.491606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.018 [2024-07-15 10:41:13.491619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.018 [2024-07-15 10:41:13.491647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.018 qpair failed and we were unable to recover it. 00:24:25.018 [2024-07-15 10:41:13.501566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.018 [2024-07-15 10:41:13.501658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.018 [2024-07-15 10:41:13.501686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.018 [2024-07-15 10:41:13.501703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.018 [2024-07-15 10:41:13.501715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.018 [2024-07-15 10:41:13.501750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.018 qpair failed and we were unable to recover it. 00:24:25.018 [2024-07-15 10:41:13.511493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.018 [2024-07-15 10:41:13.511583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.018 [2024-07-15 10:41:13.511608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.018 [2024-07-15 10:41:13.511623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.018 [2024-07-15 10:41:13.511636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.018 [2024-07-15 10:41:13.511663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.018 qpair failed and we were unable to recover it. 
00:24:25.018 [2024-07-15 10:41:13.521543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.018 [2024-07-15 10:41:13.521640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.018 [2024-07-15 10:41:13.521664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.018 [2024-07-15 10:41:13.521679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.018 [2024-07-15 10:41:13.521692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.018 [2024-07-15 10:41:13.521719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.018 qpair failed and we were unable to recover it. 00:24:25.018 [2024-07-15 10:41:13.531579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.018 [2024-07-15 10:41:13.531678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.018 [2024-07-15 10:41:13.531704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.018 [2024-07-15 10:41:13.531720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.018 [2024-07-15 10:41:13.531733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.018 [2024-07-15 10:41:13.531760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.018 qpair failed and we were unable to recover it. 00:24:25.018 [2024-07-15 10:41:13.541625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.018 [2024-07-15 10:41:13.541752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.018 [2024-07-15 10:41:13.541779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.018 [2024-07-15 10:41:13.541794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.018 [2024-07-15 10:41:13.541820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.018 [2024-07-15 10:41:13.541849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.018 qpair failed and we were unable to recover it. 
00:24:25.018 [2024-07-15 10:41:13.551621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.018 [2024-07-15 10:41:13.551713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.018 [2024-07-15 10:41:13.551743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.018 [2024-07-15 10:41:13.551760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.018 [2024-07-15 10:41:13.551772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.018 [2024-07-15 10:41:13.551806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.018 qpair failed and we were unable to recover it. 00:24:25.018 [2024-07-15 10:41:13.561642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.018 [2024-07-15 10:41:13.561736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.018 [2024-07-15 10:41:13.561761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.018 [2024-07-15 10:41:13.561775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.018 [2024-07-15 10:41:13.561788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.018 [2024-07-15 10:41:13.561826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.018 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.571667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.571763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.571809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.571829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.571842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.571872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 
00:24:25.276 [2024-07-15 10:41:13.581697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.581844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.581871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.581886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.581899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.581928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.591826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.591922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.591948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.591964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.591977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.592010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.601785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.601899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.601925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.601941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.601953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.601981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 
00:24:25.276 [2024-07-15 10:41:13.611893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.612032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.612058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.612074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.612086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.612114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.621821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.621906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.621931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.621946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.621959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.621987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.631887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.632015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.632041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.632056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.632068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.632095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 
00:24:25.276 [2024-07-15 10:41:13.641890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.641985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.642015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.642031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.642044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.642071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.651933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.652027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.652053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.652068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.652080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.652109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.661936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.662024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.662049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.662063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.662075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.662103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 
00:24:25.276 [2024-07-15 10:41:13.671990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.672075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.672099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.672114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.672126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.672154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.682001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.682146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.682172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.682188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.682205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.682233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.692057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.692179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.692205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.692220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.692232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.692260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 
00:24:25.276 [2024-07-15 10:41:13.702043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.702137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.702162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.702177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.702190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.702217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.712064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.712153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.712177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.712191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.712203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.712231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.722102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.722193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.722216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.722230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.722243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.722270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 
00:24:25.276 [2024-07-15 10:41:13.732162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.732293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.732319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.732334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.732346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.732374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.742150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.742237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.742261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.742275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.742288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.742315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.752215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.752299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.752324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.752338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.752350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.752379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 
00:24:25.276 [2024-07-15 10:41:13.762224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.762309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.762333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.762348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.762360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.762389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.772255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.772337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.772362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.772377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.772395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.772423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.782267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.782399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.782424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.782438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.782451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.782479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 
00:24:25.276 [2024-07-15 10:41:13.792342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.792424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.792449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.792464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.276 [2024-07-15 10:41:13.792476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.276 [2024-07-15 10:41:13.792504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.276 qpair failed and we were unable to recover it. 00:24:25.276 [2024-07-15 10:41:13.802352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.276 [2024-07-15 10:41:13.802442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.276 [2024-07-15 10:41:13.802466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.276 [2024-07-15 10:41:13.802481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.277 [2024-07-15 10:41:13.802493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.277 [2024-07-15 10:41:13.802521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.277 qpair failed and we were unable to recover it. 00:24:25.277 [2024-07-15 10:41:13.812411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.277 [2024-07-15 10:41:13.812529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.277 [2024-07-15 10:41:13.812553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.277 [2024-07-15 10:41:13.812569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.277 [2024-07-15 10:41:13.812581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.277 [2024-07-15 10:41:13.812609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.277 qpair failed and we were unable to recover it. 
00:24:25.277 [2024-07-15 10:41:13.822412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.277 [2024-07-15 10:41:13.822505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.277 [2024-07-15 10:41:13.822531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.277 [2024-07-15 10:41:13.822546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.277 [2024-07-15 10:41:13.822559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.277 [2024-07-15 10:41:13.822587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.277 qpair failed and we were unable to recover it. 00:24:25.535 [2024-07-15 10:41:13.832412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.535 [2024-07-15 10:41:13.832532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.535 [2024-07-15 10:41:13.832558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.535 [2024-07-15 10:41:13.832574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.535 [2024-07-15 10:41:13.832587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.535 [2024-07-15 10:41:13.832617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.535 qpair failed and we were unable to recover it. 00:24:25.535 [2024-07-15 10:41:13.842445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.535 [2024-07-15 10:41:13.842534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.535 [2024-07-15 10:41:13.842559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.535 [2024-07-15 10:41:13.842573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.535 [2024-07-15 10:41:13.842586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.535 [2024-07-15 10:41:13.842614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.535 qpair failed and we were unable to recover it. 
00:24:25.535 [2024-07-15 10:41:13.852497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.535 [2024-07-15 10:41:13.852625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.535 [2024-07-15 10:41:13.852650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.535 [2024-07-15 10:41:13.852666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.535 [2024-07-15 10:41:13.852678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.535 [2024-07-15 10:41:13.852706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.535 qpair failed and we were unable to recover it. 00:24:25.535 [2024-07-15 10:41:13.862511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.535 [2024-07-15 10:41:13.862598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.535 [2024-07-15 10:41:13.862623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.535 [2024-07-15 10:41:13.862637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.535 [2024-07-15 10:41:13.862655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.535 [2024-07-15 10:41:13.862684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.535 qpair failed and we were unable to recover it. 00:24:25.535 [2024-07-15 10:41:13.872516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.535 [2024-07-15 10:41:13.872616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.535 [2024-07-15 10:41:13.872642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.535 [2024-07-15 10:41:13.872656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.535 [2024-07-15 10:41:13.872669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.535 [2024-07-15 10:41:13.872696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.535 qpair failed and we were unable to recover it. 
00:24:25.535 [2024-07-15 10:41:13.882566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.535 [2024-07-15 10:41:13.882657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.535 [2024-07-15 10:41:13.882685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.535 [2024-07-15 10:41:13.882702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.535 [2024-07-15 10:41:13.882715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.535 [2024-07-15 10:41:13.882743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.535 qpair failed and we were unable to recover it. 00:24:25.535 [2024-07-15 10:41:13.892595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.535 [2024-07-15 10:41:13.892688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.535 [2024-07-15 10:41:13.892713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.535 [2024-07-15 10:41:13.892729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.535 [2024-07-15 10:41:13.892742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.535 [2024-07-15 10:41:13.892769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.535 qpair failed and we were unable to recover it. 00:24:25.535 [2024-07-15 10:41:13.902741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.535 [2024-07-15 10:41:13.902833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.535 [2024-07-15 10:41:13.902859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.535 [2024-07-15 10:41:13.902874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.535 [2024-07-15 10:41:13.902887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.535 [2024-07-15 10:41:13.902915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.535 qpair failed and we were unable to recover it. 
00:24:25.535 [2024-07-15 10:41:13.912636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.535 [2024-07-15 10:41:13.912722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.535 [2024-07-15 10:41:13.912747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.535 [2024-07-15 10:41:13.912762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.535 [2024-07-15 10:41:13.912774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.535 [2024-07-15 10:41:13.912809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.535 qpair failed and we were unable to recover it. 00:24:25.535 [2024-07-15 10:41:13.922708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.535 [2024-07-15 10:41:13.922806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.535 [2024-07-15 10:41:13.922831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.535 [2024-07-15 10:41:13.922847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.535 [2024-07-15 10:41:13.922860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.535 [2024-07-15 10:41:13.922887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.535 qpair failed and we were unable to recover it. 00:24:25.535 [2024-07-15 10:41:13.932704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.535 [2024-07-15 10:41:13.932799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.535 [2024-07-15 10:41:13.932830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.535 [2024-07-15 10:41:13.932845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.535 [2024-07-15 10:41:13.932858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.535 [2024-07-15 10:41:13.932887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.535 qpair failed and we were unable to recover it. 
00:24:25.535 [2024-07-15 10:41:13.942724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:13.942820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:13.942848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:13.942863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:13.942876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.536 [2024-07-15 10:41:13.942904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.536 qpair failed and we were unable to recover it. 00:24:25.536 [2024-07-15 10:41:13.952759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:13.952855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:13.952882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:13.952903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:13.952916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.536 [2024-07-15 10:41:13.952945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.536 qpair failed and we were unable to recover it. 00:24:25.536 [2024-07-15 10:41:13.962789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:13.962929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:13.962954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:13.962969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:13.962981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.536 [2024-07-15 10:41:13.963010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.536 qpair failed and we were unable to recover it. 
00:24:25.536 [2024-07-15 10:41:13.972872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:13.972961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:13.972987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:13.973001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:13.973014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.536 [2024-07-15 10:41:13.973041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.536 qpair failed and we were unable to recover it. 00:24:25.536 [2024-07-15 10:41:13.982862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:13.982948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:13.982973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:13.982987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:13.982999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.536 [2024-07-15 10:41:13.983027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.536 qpair failed and we were unable to recover it. 00:24:25.536 [2024-07-15 10:41:13.992882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:13.993005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:13.993030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:13.993045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:13.993058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.536 [2024-07-15 10:41:13.993087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.536 qpair failed and we were unable to recover it. 
00:24:25.536 [2024-07-15 10:41:14.002919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:14.003019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:14.003044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:14.003058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:14.003071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.536 [2024-07-15 10:41:14.003099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.536 qpair failed and we were unable to recover it. 00:24:25.536 [2024-07-15 10:41:14.012954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:14.013074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:14.013099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:14.013114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:14.013126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.536 [2024-07-15 10:41:14.013155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.536 qpair failed and we were unable to recover it. 00:24:25.536 [2024-07-15 10:41:14.023022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:14.023137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:14.023162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:14.023177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:14.023190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.536 [2024-07-15 10:41:14.023218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.536 qpair failed and we were unable to recover it. 
00:24:25.536 [2024-07-15 10:41:14.032983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:14.033062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:14.033087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:14.033101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:14.033114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.536 [2024-07-15 10:41:14.033141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.536 qpair failed and we were unable to recover it. 00:24:25.536 [2024-07-15 10:41:14.043014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:14.043105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:14.043130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:14.043152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:14.043165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.536 [2024-07-15 10:41:14.043193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.536 qpair failed and we were unable to recover it. 00:24:25.536 [2024-07-15 10:41:14.053029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:14.053130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:14.053155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:14.053170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:14.053183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1098200 00:24:25.536 [2024-07-15 10:41:14.053211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.536 qpair failed and we were unable to recover it. 
00:24:25.536 [2024-07-15 10:41:14.063090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:14.063176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:14.063207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:14.063223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:14.063237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff714000b90 00:24:25.536 [2024-07-15 10:41:14.063268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:25.536 qpair failed and we were unable to recover it. 00:24:25.536 [2024-07-15 10:41:14.073133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:14.073239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:14.073265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:14.073280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:14.073293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff714000b90 00:24:25.536 [2024-07-15 10:41:14.073322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:25.536 qpair failed and we were unable to recover it. 00:24:25.536 [2024-07-15 10:41:14.083184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.536 [2024-07-15 10:41:14.083318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.536 [2024-07-15 10:41:14.083348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.536 [2024-07-15 10:41:14.083364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.536 [2024-07-15 10:41:14.083376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff71c000b90 00:24:25.536 [2024-07-15 10:41:14.083407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.536 qpair failed and we were unable to recover it. 
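By this point the rejections are no longer confined to tqpair=0x1098200 (qpair id 3): the most recent blocks hit tqpair=0x7ff714000b90 on qpair id 2 and 0x7ff71c000b90 on qpair id 1, and the blocks that follow add 0x7ff70c000b90 on qpair id 4, so the target is refusing every one of the initiator's I/O queue pairs rather than a single stuck one. To see that distribution at a glance from the same hypothetical saved log, group the messages by qpair id and by tqpair pointer:
# how often each qpair id is reported as failed
grep -o 'on qpair id [0-9]*' target_disconnect.log | sort | uniq -c
# which transport qpair pointers the failures were charged to
grep -o 'Failed to connect tqpair=0x[0-9a-f]*' target_disconnect.log | sort | uniq -c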
00:24:25.794 [2024-07-15 10:41:14.093226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.794 [2024-07-15 10:41:14.093312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.794 [2024-07-15 10:41:14.093340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.794 [2024-07-15 10:41:14.093355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.794 [2024-07-15 10:41:14.093368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff71c000b90 00:24:25.794 [2024-07-15 10:41:14.093398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.794 qpair failed and we were unable to recover it. 00:24:25.794 [2024-07-15 10:41:14.103203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.794 [2024-07-15 10:41:14.103285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.794 [2024-07-15 10:41:14.103316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.794 [2024-07-15 10:41:14.103332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.794 [2024-07-15 10:41:14.103344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff70c000b90 00:24:25.794 [2024-07-15 10:41:14.103375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:25.794 qpair failed and we were unable to recover it. 00:24:25.794 [2024-07-15 10:41:14.113211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.794 [2024-07-15 10:41:14.113297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.794 [2024-07-15 10:41:14.113324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.794 [2024-07-15 10:41:14.113338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.794 [2024-07-15 10:41:14.113351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff70c000b90 00:24:25.794 [2024-07-15 10:41:14.113380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:25.794 qpair failed and we were unable to recover it. 00:24:25.794 [2024-07-15 10:41:14.113484] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:25.794 A controller has encountered a failure and is being reset. 00:24:25.794 Controller properly reset. 
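The keep-alive failure is what finally ends the retry loop: once "Submitting Keep Alive failed" is logged the host marks the controller as failed, resets it, and the messages that follow show the controller being re-attached at 10.0.0.2:4420 (nqn.2016-06.io.spdk:cnode1) and its queue pairs re-associated with lcores 0-3 before the workers restart. Outside this harness, the same attach can be checked by hand with nvme-cli against the listener the log reports; the commands below are only an illustrative sketch and are not part of this test run:
# attach to the target's TCP listener using the address and NQN from the log
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# confirm the controller shows up, then detach again
nvme list-subsys
nvme disconnect -n nqn.2016-06.io.spdk:cnode1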
00:24:25.794 Initializing NVMe Controllers 00:24:25.794 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:25.794 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:25.794 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:25.794 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:25.794 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:25.794 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:25.794 Initialization complete. Launching workers. 00:24:25.794 Starting thread on core 1 00:24:25.794 Starting thread on core 2 00:24:25.794 Starting thread on core 3 00:24:25.794 Starting thread on core 0 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:25.794 00:24:25.794 real 0m10.798s 00:24:25.794 user 0m19.122s 00:24:25.794 sys 0m5.102s 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:25.794 ************************************ 00:24:25.794 END TEST nvmf_target_disconnect_tc2 00:24:25.794 ************************************ 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:25.794 rmmod nvme_tcp 00:24:25.794 rmmod nvme_fabrics 00:24:25.794 rmmod nvme_keyring 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1304862 ']' 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1304862 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1304862 ']' 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1304862 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 1304862 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1304862' 00:24:25.794 killing process with pid 1304862 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1304862 00:24:25.794 10:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1304862 00:24:26.053 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:26.053 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:26.053 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:26.053 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:26.053 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:26.053 10:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.053 10:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.053 10:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.581 10:41:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:28.581 00:24:28.581 real 0m15.685s 00:24:28.581 user 0m45.453s 00:24:28.581 sys 0m7.093s 00:24:28.581 10:41:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:28.581 10:41:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:28.581 ************************************ 00:24:28.581 END TEST nvmf_target_disconnect 00:24:28.581 ************************************ 00:24:28.581 10:41:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:28.582 10:41:16 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:24:28.582 10:41:16 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:28.582 10:41:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.582 10:41:16 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:24:28.582 00:24:28.582 real 19m4.033s 00:24:28.582 user 45m6.030s 00:24:28.582 sys 4m45.109s 00:24:28.582 10:41:16 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:28.582 10:41:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.582 ************************************ 00:24:28.582 END TEST nvmf_tcp 00:24:28.582 ************************************ 00:24:28.582 10:41:16 -- common/autotest_common.sh@1142 -- # return 0 00:24:28.582 10:41:16 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:24:28.582 10:41:16 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:28.582 10:41:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:28.582 10:41:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:28.582 10:41:16 -- common/autotest_common.sh@10 -- # set +x 00:24:28.582 ************************************ 00:24:28.582 START TEST spdkcli_nvmf_tcp 00:24:28.582 ************************************ 00:24:28.582 10:41:16 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:28.582 * Looking for test storage... 00:24:28.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1306063 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1306063 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1306063 ']' 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.582 10:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.582 [2024-07-15 10:41:16.789972] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:28.582 [2024-07-15 10:41:16.790056] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306063 ] 00:24:28.582 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.582 [2024-07-15 10:41:16.846307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:28.582 [2024-07-15 10:41:16.952772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.582 [2024-07-15 10:41:16.952777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.582 10:41:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.582 10:41:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:24:28.582 10:41:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:28.582 10:41:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:28.582 10:41:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.582 10:41:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:28.582 10:41:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:28.582 10:41:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:28.582 10:41:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:28.582 10:41:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.582 10:41:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:28.582 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:28.582 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:28.582 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:28.582 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:28.582 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:28.582 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:28.582 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:28.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:28.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:28.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:28.582 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:28.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:28.582 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:28.582 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:28.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:28.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:28.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:28.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:28.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:28.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:28.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:28.583 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:28.583 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:28.583 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:28.583 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:28.583 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:28.583 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:28.583 ' 00:24:31.111 [2024-07-15 10:41:19.587638] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.483 [2024-07-15 10:41:20.811839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:24:35.085 [2024-07-15 10:41:23.070914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:24:36.984 [2024-07-15 10:41:25.016954] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:24:38.357 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:38.357 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:38.357 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:38.357 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:38.357 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:38.357 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:38.357 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:38.357 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:38.357 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:38.357 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:38.357 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:38.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:38.357 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:38.357 10:41:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:38.357 10:41:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:38.357 10:41:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:38.357 10:41:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:38.357 10:41:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:38.357 10:41:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:38.357 10:41:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:24:38.357 10:41:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:24:38.615 10:41:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:38.615 10:41:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:38.615 10:41:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:38.615 10:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:38.615 10:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:38.615 10:41:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:38.615 10:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:38.615 10:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:38.615 10:41:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:38.615 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:38.615 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:38.615 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:38.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:24:38.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:24:38.616 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:38.616 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:38.616 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:38.616 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:38.616 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:38.616 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:38.616 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:38.616 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:38.616 ' 00:24:43.876 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:43.876 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:43.876 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:43.876 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:43.876 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:24:43.876 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:24:43.876 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:43.876 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:43.876 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:43.876 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:24:43.876 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:24:43.876 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:43.876 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:43.876 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:43.876 10:41:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:43.876 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:43.876 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:43.876 10:41:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1306063 00:24:43.876 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1306063 ']' 00:24:43.876 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1306063 00:24:43.876 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:24:43.876 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:43.876 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1306063 00:24:43.876 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:43.876 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:43.876 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1306063' 00:24:43.876 killing process with pid 1306063 00:24:43.876 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1306063 00:24:43.876 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1306063 00:24:44.134 10:41:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:24:44.134 10:41:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:24:44.134 10:41:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1306063 ']' 00:24:44.134 10:41:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1306063 00:24:44.134 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1306063 ']' 00:24:44.134 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1306063 00:24:44.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1306063) - No such process 00:24:44.134 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1306063 is not found' 00:24:44.134 Process with pid 1306063 is not found 00:24:44.134 10:41:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:44.134 10:41:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:44.134 10:41:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:44.134 00:24:44.134 real 0m15.966s 00:24:44.134 user 0m33.694s 00:24:44.134 sys 0m0.791s 00:24:44.134 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:44.134 10:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:44.134 ************************************ 00:24:44.134 END TEST spdkcli_nvmf_tcp 00:24:44.134 ************************************ 00:24:44.134 10:41:32 -- common/autotest_common.sh@1142 -- # return 0 00:24:44.134 10:41:32 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:44.134 10:41:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:44.134 10:41:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:44.134 10:41:32 -- common/autotest_common.sh@10 -- # set +x 00:24:44.393 ************************************ 00:24:44.393 START TEST nvmf_identify_passthru 00:24:44.393 ************************************ 00:24:44.393 10:41:32 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:44.393 * Looking for test storage... 00:24:44.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:44.393 10:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.393 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.393 10:41:32 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.393 10:41:32 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.393 10:41:32 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.394 10:41:32 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.394 10:41:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.394 10:41:32 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.394 10:41:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:24:44.394 10:41:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:44.394 10:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.394 10:41:32 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.394 10:41:32 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.394 10:41:32 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.394 10:41:32 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.394 10:41:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.394 10:41:32 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.394 10:41:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:24:44.394 10:41:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.394 10:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.394 10:41:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:44.394 10:41:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:44.394 10:41:32 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:24:44.394 10:41:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.293 10:41:34 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:46.293 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:46.293 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:46.293 Found net devices under 0000:09:00.0: cvl_0_0 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:46.293 Found net devices under 0000:09:00.1: cvl_0_1 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.293 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:46.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:24:46.294 00:24:46.294 --- 10.0.0.2 ping statistics --- 00:24:46.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.294 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:46.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:24:46.294 00:24:46.294 --- 10.0.0.1 ping statistics --- 00:24:46.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.294 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:46.294 10:41:34 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:46.294 10:41:34 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:24:46.294 10:41:34 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:46.294 10:41:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:46.294 10:41:34 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:24:46.294 10:41:34 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:24:46.294 10:41:34 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:24:46.294 10:41:34 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:24:46.294 10:41:34 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:24:46.294 10:41:34 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:24:46.294 10:41:34 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:24:46.294 10:41:34 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:46.294 10:41:34 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:46.294 10:41:34 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:24:46.552 10:41:34 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:24:46.552 10:41:34 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:24:46.552 10:41:34 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:0b:00.0 00:24:46.553 10:41:34 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:24:46.553 10:41:34 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:24:46.553 10:41:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:24:46.553 10:41:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:24:46.553 10:41:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:24:46.553 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.738 
10:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:24:50.738 10:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:24:50.738 10:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:24:50.738 10:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:24:50.738 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.920 10:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:24:54.920 10:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:24:54.920 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:54.920 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:54.920 10:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:24:54.920 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:54.920 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:54.920 10:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1310558 00:24:54.920 10:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:54.920 10:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:54.920 10:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1310558 00:24:54.920 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1310558 ']' 00:24:54.920 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.920 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.920 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.920 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.920 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:54.920 [2024-07-15 10:41:43.208065] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:54.920 [2024-07-15 10:41:43.208179] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.920 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.920 [2024-07-15 10:41:43.274228] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.920 [2024-07-15 10:41:43.382490] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.920 [2024-07-15 10:41:43.382543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:54.920 [2024-07-15 10:41:43.382556] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.920 [2024-07-15 10:41:43.382567] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.921 [2024-07-15 10:41:43.382576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.921 [2024-07-15 10:41:43.382652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.921 [2024-07-15 10:41:43.383836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.921 [2024-07-15 10:41:43.383862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.921 [2024-07-15 10:41:43.383865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.921 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.921 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:24:54.921 10:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:24:54.921 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.921 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:54.921 INFO: Log level set to 20 00:24:54.921 INFO: Requests: 00:24:54.921 { 00:24:54.921 "jsonrpc": "2.0", 00:24:54.921 "method": "nvmf_set_config", 00:24:54.921 "id": 1, 00:24:54.921 "params": { 00:24:54.921 "admin_cmd_passthru": { 00:24:54.921 "identify_ctrlr": true 00:24:54.921 } 00:24:54.921 } 00:24:54.921 } 00:24:54.921 00:24:54.921 INFO: response: 00:24:54.921 { 00:24:54.921 "jsonrpc": "2.0", 00:24:54.921 "id": 1, 00:24:54.921 "result": true 00:24:54.921 } 00:24:54.921 00:24:54.921 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.921 10:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:24:54.921 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.921 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:54.921 INFO: Setting log level to 20 00:24:54.921 INFO: Setting log level to 20 00:24:54.921 INFO: Log level set to 20 00:24:54.921 INFO: Log level set to 20 00:24:54.921 INFO: Requests: 00:24:54.921 { 00:24:54.921 "jsonrpc": "2.0", 00:24:54.921 "method": "framework_start_init", 00:24:54.921 "id": 1 00:24:54.921 } 00:24:54.921 00:24:54.921 INFO: Requests: 00:24:54.921 { 00:24:54.921 "jsonrpc": "2.0", 00:24:54.921 "method": "framework_start_init", 00:24:54.921 "id": 1 00:24:54.921 } 00:24:54.921 00:24:55.179 [2024-07-15 10:41:43.531127] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:24:55.179 INFO: response: 00:24:55.179 { 00:24:55.179 "jsonrpc": "2.0", 00:24:55.179 "id": 1, 00:24:55.179 "result": true 00:24:55.179 } 00:24:55.179 00:24:55.179 INFO: response: 00:24:55.179 { 00:24:55.179 "jsonrpc": "2.0", 00:24:55.179 "id": 1, 00:24:55.179 "result": true 00:24:55.179 } 00:24:55.179 00:24:55.179 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.179 10:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:55.179 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.179 10:41:43 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:24:55.179 INFO: Setting log level to 40 00:24:55.179 INFO: Setting log level to 40 00:24:55.179 INFO: Setting log level to 40 00:24:55.179 [2024-07-15 10:41:43.541200] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.179 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.179 10:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:24:55.179 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:55.179 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:55.179 10:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:24:55.179 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.179 10:41:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:58.456 Nvme0n1 00:24:58.456 10:41:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.456 10:41:46 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:24:58.456 10:41:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.456 10:41:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:58.456 10:41:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.456 10:41:46 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:58.456 10:41:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.456 10:41:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:58.456 10:41:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.456 10:41:46 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.456 10:41:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.456 10:41:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:58.456 [2024-07-15 10:41:46.431730] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.456 10:41:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.456 10:41:46 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:24:58.456 10:41:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.456 10:41:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:58.456 [ 00:24:58.456 { 00:24:58.456 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:58.456 "subtype": "Discovery", 00:24:58.456 "listen_addresses": [], 00:24:58.456 "allow_any_host": true, 00:24:58.456 "hosts": [] 00:24:58.456 }, 00:24:58.456 { 00:24:58.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.456 "subtype": "NVMe", 00:24:58.456 "listen_addresses": [ 00:24:58.456 { 00:24:58.456 "trtype": "TCP", 00:24:58.456 "adrfam": "IPv4", 00:24:58.456 "traddr": "10.0.0.2", 00:24:58.456 "trsvcid": "4420" 00:24:58.456 } 00:24:58.456 ], 00:24:58.456 "allow_any_host": true, 00:24:58.456 "hosts": [], 00:24:58.456 "serial_number": 
"SPDK00000000000001", 00:24:58.456 "model_number": "SPDK bdev Controller", 00:24:58.456 "max_namespaces": 1, 00:24:58.456 "min_cntlid": 1, 00:24:58.456 "max_cntlid": 65519, 00:24:58.456 "namespaces": [ 00:24:58.456 { 00:24:58.456 "nsid": 1, 00:24:58.456 "bdev_name": "Nvme0n1", 00:24:58.456 "name": "Nvme0n1", 00:24:58.456 "nguid": "061C79C6C83E4B77A2F4AF91D36D58FE", 00:24:58.456 "uuid": "061c79c6-c83e-4b77-a2f4-af91d36d58fe" 00:24:58.456 } 00:24:58.456 ] 00:24:58.456 } 00:24:58.456 ] 00:24:58.456 10:41:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.456 10:41:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:58.456 10:41:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:24:58.456 10:41:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:24:58.456 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.456 10:41:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:24:58.456 10:41:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:58.456 10:41:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:24:58.456 10:41:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:24:58.456 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.713 10:41:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:24:58.713 10:41:47 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:24:58.713 10:41:47 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:24:58.713 10:41:47 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:58.713 10:41:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.713 10:41:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:58.713 10:41:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.713 10:41:47 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:24:58.713 10:41:47 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:24:58.713 10:41:47 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:58.713 10:41:47 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:24:58.713 10:41:47 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:58.713 10:41:47 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:24:58.713 10:41:47 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:58.713 10:41:47 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:58.713 rmmod nvme_tcp 00:24:58.713 rmmod nvme_fabrics 00:24:58.713 rmmod nvme_keyring 00:24:58.713 10:41:47 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:58.713 10:41:47 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:24:58.713 10:41:47 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:24:58.713 10:41:47 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1310558 ']' 00:24:58.713 10:41:47 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1310558 00:24:58.713 10:41:47 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1310558 ']' 00:24:58.713 10:41:47 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1310558 00:24:58.714 10:41:47 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:24:58.714 10:41:47 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:58.714 10:41:47 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1310558 00:24:58.714 10:41:47 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:58.714 10:41:47 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:58.714 10:41:47 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1310558' 00:24:58.714 killing process with pid 1310558 00:24:58.714 10:41:47 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1310558 00:24:58.714 10:41:47 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1310558 00:25:00.609 10:41:48 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:00.609 10:41:48 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:00.609 10:41:48 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:00.609 10:41:48 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:00.609 10:41:48 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:00.609 10:41:48 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.609 10:41:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:00.609 10:41:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.512 10:41:50 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:02.512 00:25:02.512 real 0m18.008s 00:25:02.512 user 0m27.314s 00:25:02.512 sys 0m2.234s 00:25:02.512 10:41:50 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:02.512 10:41:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:02.512 ************************************ 00:25:02.512 END TEST nvmf_identify_passthru 00:25:02.512 ************************************ 00:25:02.512 10:41:50 -- common/autotest_common.sh@1142 -- # return 0 00:25:02.512 10:41:50 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:02.512 10:41:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:02.512 10:41:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.512 10:41:50 -- common/autotest_common.sh@10 -- # set +x 00:25:02.512 ************************************ 00:25:02.512 START TEST nvmf_dif 00:25:02.512 ************************************ 00:25:02.512 10:41:50 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:02.512 * Looking for test storage... 
00:25:02.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:02.512 10:41:50 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.512 10:41:50 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.512 10:41:50 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.512 10:41:50 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.512 10:41:50 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.512 10:41:50 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.512 10:41:50 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.512 10:41:50 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:02.512 10:41:50 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:02.512 10:41:50 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:02.512 10:41:50 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:02.512 10:41:50 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:02.512 10:41:50 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:02.512 10:41:50 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.512 10:41:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:02.512 10:41:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:02.512 10:41:50 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:02.512 10:41:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:04.414 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:04.414 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:04.414 Found net devices under 0000:09:00.0: cvl_0_0 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:04.414 Found net devices under 0000:09:00.1: cvl_0_1 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:04.414 10:41:52 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:04.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:04.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:25:04.414 00:25:04.414 --- 10.0.0.2 ping statistics --- 00:25:04.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.414 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:04.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:04.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:25:04.414 00:25:04.414 --- 10.0.0.1 ping statistics --- 00:25:04.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.414 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:04.414 10:41:52 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:05.784 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:05.784 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:05.784 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:05.784 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:05.784 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:05.784 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:05.784 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:05.784 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:05.784 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:05.784 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:05.784 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:05.784 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:05.784 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:05.784 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:05.784 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:05.784 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:05.784 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:05.784 10:41:54 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.784 10:41:54 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:05.784 10:41:54 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:05.784 10:41:54 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.784 10:41:54 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:05.784 10:41:54 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:06.041 10:41:54 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:06.041 10:41:54 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:06.041 10:41:54 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:06.041 10:41:54 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:06.041 10:41:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:06.041 10:41:54 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1313824 00:25:06.041 10:41:54 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:06.041 10:41:54 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1313824 00:25:06.041 10:41:54 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1313824 ']' 00:25:06.041 10:41:54 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.041 10:41:54 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:06.041 10:41:54 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.041 10:41:54 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:06.041 10:41:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:06.041 [2024-07-15 10:41:54.386077] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:06.041 [2024-07-15 10:41:54.386177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.041 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.041 [2024-07-15 10:41:54.448771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.041 [2024-07-15 10:41:54.547291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.041 [2024-07-15 10:41:54.547344] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.041 [2024-07-15 10:41:54.547369] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.041 [2024-07-15 10:41:54.547379] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.041 [2024-07-15 10:41:54.547388] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
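Condensed for reference, the nvmftestinit/nvmf_tcp_init steps traced above move one port of the e810 pair (cvl_0_0) into a private network namespace, leave the other port (cvl_0_1) in the root namespace as the initiator side, and then start the target inside the namespace. Everything below is copied from the trace; only the ordering is compacted and the surrounding wrapper logic (nvmfappstart, waitforlisten) is paraphrased.

# target side: cvl_0_0 / 10.0.0.2 inside the namespace; initiator side: cvl_0_1 / 10.0.0.1
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                             # reachability check, both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# nvmfappstart then launches the target in the namespace with all tracepoint groups enabled
# (backgrounded by the suite; waitforlisten polls the RPC socket at /var/tmp/spdk.sock):
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &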
00:25:06.041 [2024-07-15 10:41:54.547413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.299 10:41:54 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:06.299 10:41:54 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:25:06.299 10:41:54 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:06.299 10:41:54 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:06.299 10:41:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:06.299 10:41:54 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.299 10:41:54 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:06.299 10:41:54 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:06.299 10:41:54 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.299 10:41:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:06.299 [2024-07-15 10:41:54.688596] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.299 10:41:54 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.299 10:41:54 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:06.299 10:41:54 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:06.299 10:41:54 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:06.299 10:41:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:06.299 ************************************ 00:25:06.299 START TEST fio_dif_1_default 00:25:06.299 ************************************ 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:06.299 bdev_null0 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:06.299 [2024-07-15 10:41:54.744903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:06.299 { 00:25:06.299 "params": { 00:25:06.299 "name": "Nvme$subsystem", 00:25:06.299 "trtype": "$TEST_TRANSPORT", 00:25:06.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.299 "adrfam": "ipv4", 00:25:06.299 "trsvcid": "$NVMF_PORT", 00:25:06.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.299 "hdgst": ${hdgst:-false}, 00:25:06.299 "ddgst": ${ddgst:-false} 00:25:06.299 }, 00:25:06.299 "method": "bdev_nvme_attach_controller" 00:25:06.299 } 00:25:06.299 EOF 00:25:06.299 )") 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:06.299 "params": { 00:25:06.299 "name": "Nvme0", 00:25:06.299 "trtype": "tcp", 00:25:06.299 "traddr": "10.0.0.2", 00:25:06.299 "adrfam": "ipv4", 00:25:06.299 "trsvcid": "4420", 00:25:06.299 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.299 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:06.299 "hdgst": false, 00:25:06.299 "ddgst": false 00:25:06.299 }, 00:25:06.299 "method": "bdev_nvme_attach_controller" 00:25:06.299 }' 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:06.299 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:06.300 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:06.300 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:06.300 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:06.300 10:41:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:06.557 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:06.557 fio-3.35 00:25:06.557 Starting 1 thread 00:25:06.557 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.797 00:25:18.797 filename0: (groupid=0, jobs=1): err= 0: pid=1314055: Mon Jul 15 10:42:05 2024 00:25:18.797 read: IOPS=190, BW=764KiB/s (782kB/s)(7648KiB/10014msec) 00:25:18.797 slat (nsec): min=4677, max=79316, avg=9464.89, stdev=4484.45 00:25:18.797 clat (usec): min=543, max=45071, avg=20918.57, stdev=20468.00 00:25:18.797 lat (usec): min=550, max=45103, avg=20928.03, stdev=20468.38 00:25:18.797 clat percentiles (usec): 00:25:18.797 | 1.00th=[ 570], 5.00th=[ 619], 10.00th=[ 652], 20.00th=[ 717], 00:25:18.797 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 816], 60.00th=[41157], 00:25:18.797 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:25:18.797 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:25:18.797 | 99.99th=[44827] 00:25:18.797 bw ( KiB/s): min= 672, max= 832, per=99.90%, avg=763.20, stdev=36.37, samples=20 00:25:18.797 iops : min= 168, max= 208, 
avg=190.80, stdev= 9.09, samples=20 00:25:18.797 lat (usec) : 750=34.57%, 1000=16.06% 00:25:18.797 lat (msec) : 50=49.37% 00:25:18.797 cpu : usr=90.03%, sys=9.67%, ctx=18, majf=0, minf=228 00:25:18.797 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:18.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.797 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.797 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:18.797 00:25:18.797 Run status group 0 (all jobs): 00:25:18.797 READ: bw=764KiB/s (782kB/s), 764KiB/s-764KiB/s (782kB/s-782kB/s), io=7648KiB (7832kB), run=10014-10014msec 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.797 00:25:18.797 real 0m11.254s 00:25:18.797 user 0m10.332s 00:25:18.797 sys 0m1.223s 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:18.797 10:42:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:18.797 ************************************ 00:25:18.797 END TEST fio_dif_1_default 00:25:18.797 ************************************ 00:25:18.797 10:42:05 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:25:18.797 10:42:05 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:18.797 10:42:05 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:18.797 10:42:05 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:18.797 10:42:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:18.797 ************************************ 00:25:18.797 START TEST fio_dif_1_multi_subsystems 00:25:18.797 ************************************ 00:25:18.797 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:25:18.797 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:18.797 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:18.797 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:18.797 10:42:06 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:18.797 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:25:18.797 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:18.797 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:18.797 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.797 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:18.798 bdev_null0 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:18.798 [2024-07-15 10:42:06.048701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:18.798 bdev_null1 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:18.798 { 00:25:18.798 "params": { 00:25:18.798 "name": "Nvme$subsystem", 00:25:18.798 "trtype": "$TEST_TRANSPORT", 00:25:18.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.798 "adrfam": "ipv4", 00:25:18.798 "trsvcid": "$NVMF_PORT", 00:25:18.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.798 "hdgst": ${hdgst:-false}, 00:25:18.798 "ddgst": ${ddgst:-false} 00:25:18.798 }, 00:25:18.798 "method": "bdev_nvme_attach_controller" 00:25:18.798 } 00:25:18.798 EOF 00:25:18.798 )") 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 
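The fio_dif_1_multi_subsystems setup traced above reduces to one RPC sequence per subsystem: a null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1 (the 64 is NULL_SIZE and is assumed to be megabytes), wrapped in an NVMe-oF subsystem and exposed on the TCP listener. rpc_cmd is the suite's wrapper around the target's RPC socket; outside the suite the same calls would presumably go through SPDK's scripts/rpc.py against /var/tmp/spdk.sock.

# create_subsystem $i, as traced above for i=0 and i=1
for i in 0 1; do
    rpc_cmd bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" --serial-number "53313233-$i" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done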
00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:18.798 { 00:25:18.798 "params": { 00:25:18.798 "name": "Nvme$subsystem", 00:25:18.798 "trtype": "$TEST_TRANSPORT", 00:25:18.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.798 "adrfam": "ipv4", 00:25:18.798 "trsvcid": "$NVMF_PORT", 00:25:18.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.798 "hdgst": ${hdgst:-false}, 00:25:18.798 "ddgst": ${ddgst:-false} 00:25:18.798 }, 00:25:18.798 "method": "bdev_nvme_attach_controller" 00:25:18.798 } 00:25:18.798 EOF 00:25:18.798 )") 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
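On the initiator side, nothing is persisted: gen_nvmf_target_json writes a bdev JSON config with one bdev_nvme_attach_controller entry per subsystem (printed in full just below), and fio_bdev runs stock fio with the SPDK bdev engine preloaded. A minimal sketch of that invocation, taken from the trace; the assumption is that /dev/fd/62 carries the generated JSON and /dev/fd/61 the job file produced by gen_fio_conf (when ASAN is present, its library would be prepended to LD_PRELOAD as the asan_lib checks above suggest).

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61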
00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:18.798 "params": { 00:25:18.798 "name": "Nvme0", 00:25:18.798 "trtype": "tcp", 00:25:18.798 "traddr": "10.0.0.2", 00:25:18.798 "adrfam": "ipv4", 00:25:18.798 "trsvcid": "4420", 00:25:18.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:18.798 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:18.798 "hdgst": false, 00:25:18.798 "ddgst": false 00:25:18.798 }, 00:25:18.798 "method": "bdev_nvme_attach_controller" 00:25:18.798 },{ 00:25:18.798 "params": { 00:25:18.798 "name": "Nvme1", 00:25:18.798 "trtype": "tcp", 00:25:18.798 "traddr": "10.0.0.2", 00:25:18.798 "adrfam": "ipv4", 00:25:18.798 "trsvcid": "4420", 00:25:18.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:18.798 "hdgst": false, 00:25:18.798 "ddgst": false 00:25:18.798 }, 00:25:18.798 "method": "bdev_nvme_attach_controller" 00:25:18.798 }' 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:18.798 10:42:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:18.798 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:18.798 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:18.798 fio-3.35 00:25:18.798 Starting 2 threads 00:25:18.798 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.757 00:25:28.757 filename0: (groupid=0, jobs=1): err= 0: pid=1315574: Mon Jul 15 10:42:17 2024 00:25:28.757 read: IOPS=190, BW=761KiB/s (779kB/s)(7632KiB/10029msec) 00:25:28.757 slat (nsec): min=6990, max=91018, avg=8960.83, stdev=3587.88 00:25:28.757 clat (usec): min=529, max=42450, avg=20997.52, stdev=20484.40 00:25:28.757 lat (usec): min=537, max=42461, avg=21006.48, stdev=20483.97 00:25:28.757 clat percentiles (usec): 00:25:28.757 | 1.00th=[ 570], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 611], 00:25:28.757 | 30.00th=[ 635], 40.00th=[ 676], 50.00th=[ 1004], 60.00th=[41157], 00:25:28.757 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:25:28.757 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:25:28.757 | 99.99th=[42206] 
00:25:28.757 bw ( KiB/s): min= 704, max= 800, per=66.07%, avg=761.60, stdev=22.27, samples=20 00:25:28.757 iops : min= 176, max= 200, avg=190.40, stdev= 5.57, samples=20 00:25:28.757 lat (usec) : 750=48.22%, 1000=1.73% 00:25:28.757 lat (msec) : 2=0.16%, 4=0.21%, 50=49.69% 00:25:28.757 cpu : usr=94.24%, sys=5.45%, ctx=15, majf=0, minf=167 00:25:28.757 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:28.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.757 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.757 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:28.757 filename1: (groupid=0, jobs=1): err= 0: pid=1315575: Mon Jul 15 10:42:17 2024 00:25:28.757 read: IOPS=97, BW=391KiB/s (401kB/s)(3920KiB/10013msec) 00:25:28.757 slat (nsec): min=6994, max=30218, avg=9266.67, stdev=3164.03 00:25:28.757 clat (usec): min=587, max=43958, avg=40840.42, stdev=2588.08 00:25:28.757 lat (usec): min=594, max=43988, avg=40849.69, stdev=2587.89 00:25:28.757 clat percentiles (usec): 00:25:28.757 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:25:28.757 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:28.757 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:28.757 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:25:28.757 | 99.99th=[43779] 00:25:28.757 bw ( KiB/s): min= 384, max= 416, per=33.86%, avg=390.40, stdev=13.13, samples=20 00:25:28.757 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:25:28.757 lat (usec) : 750=0.41% 00:25:28.757 lat (msec) : 50=99.59% 00:25:28.757 cpu : usr=94.10%, sys=5.59%, ctx=20, majf=0, minf=144 00:25:28.757 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:28.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.757 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.757 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:28.757 00:25:28.757 Run status group 0 (all jobs): 00:25:28.757 READ: bw=1152KiB/s (1180kB/s), 391KiB/s-761KiB/s (401kB/s-779kB/s), io=11.3MiB (11.8MB), run=10013-10029msec 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.023 00:25:29.023 real 0m11.362s 00:25:29.023 user 0m20.220s 00:25:29.023 sys 0m1.399s 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:29.023 10:42:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:29.023 ************************************ 00:25:29.023 END TEST fio_dif_1_multi_subsystems 00:25:29.023 ************************************ 00:25:29.023 10:42:17 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:25:29.023 10:42:17 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:29.023 10:42:17 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:29.023 10:42:17 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.023 10:42:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:29.023 ************************************ 00:25:29.023 START TEST fio_dif_rand_params 00:25:29.023 ************************************ 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:29.023 10:42:17 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:29.023 bdev_null0 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:29.023 [2024-07-15 10:42:17.470450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:29.023 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 
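The first fio_dif_rand_params pass reuses the subsystem plumbing sketched earlier, with two differences visible in the trace: the transport was created with DIF offload enabled, and the backing null bdev now carries DIF type 3 metadata instead of type 1. A short sketch of just those two calls (arguments copied from the trace):

# done once in create_transport above; the target inserts/strips protection information
rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
# this pass: DIF type 3 on the null bdev; subsystem/ns/listener calls are unchanged
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3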
00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:29.024 { 00:25:29.024 "params": { 00:25:29.024 "name": "Nvme$subsystem", 00:25:29.024 "trtype": "$TEST_TRANSPORT", 00:25:29.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.024 "adrfam": "ipv4", 00:25:29.024 "trsvcid": "$NVMF_PORT", 00:25:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.024 "hdgst": ${hdgst:-false}, 00:25:29.024 "ddgst": ${ddgst:-false} 00:25:29.024 }, 00:25:29.024 "method": "bdev_nvme_attach_controller" 00:25:29.024 } 00:25:29.024 EOF 00:25:29.024 )") 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
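For this pass the suite sets bs=128k, numjobs=3, iodepth=3 and runtime=5, which matches the "Starting 3 threads" randread run reported below. A rough sketch of the job file gen_fio_conf is assumed to emit for those parameters; the option spellings, thread=1, and the Nvme0n1 bdev name (from the Nvme0 controller attached via the JSON) are illustrative assumptions, not copied from the helper.

cat <<'JOB' >/tmp/dif_rand.fio   # hypothetical path; the suite streams it over /dev/fd/61
[global]
ioengine=spdk_bdev
thread=1
direct=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
[filename0]
filename=Nvme0n1
JOB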
00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:29.024 "params": { 00:25:29.024 "name": "Nvme0", 00:25:29.024 "trtype": "tcp", 00:25:29.024 "traddr": "10.0.0.2", 00:25:29.024 "adrfam": "ipv4", 00:25:29.024 "trsvcid": "4420", 00:25:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:29.024 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:29.024 "hdgst": false, 00:25:29.024 "ddgst": false 00:25:29.024 }, 00:25:29.024 "method": "bdev_nvme_attach_controller" 00:25:29.024 }' 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:29.024 10:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:29.282 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:29.282 ... 
00:25:29.282 fio-3.35 00:25:29.282 Starting 3 threads 00:25:29.282 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.848 00:25:35.848 filename0: (groupid=0, jobs=1): err= 0: pid=1317474: Mon Jul 15 10:42:23 2024 00:25:35.848 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(132MiB/5005msec) 00:25:35.848 slat (nsec): min=5946, max=52194, avg=15746.48, stdev=4863.30 00:25:35.848 clat (usec): min=4834, max=54139, avg=14157.29, stdev=3438.02 00:25:35.848 lat (usec): min=4842, max=54159, avg=14173.04, stdev=3438.14 00:25:35.848 clat percentiles (usec): 00:25:35.848 | 1.00th=[ 7111], 5.00th=[10552], 10.00th=[11469], 20.00th=[12256], 00:25:35.848 | 30.00th=[12911], 40.00th=[13566], 50.00th=[14222], 60.00th=[14746], 00:25:35.848 | 70.00th=[15270], 80.00th=[15795], 90.00th=[16581], 95.00th=[16909], 00:25:35.848 | 99.00th=[17695], 99.50th=[45876], 99.90th=[54264], 99.95th=[54264], 00:25:35.848 | 99.99th=[54264] 00:25:35.848 bw ( KiB/s): min=24832, max=29696, per=31.76%, avg=27038.80, stdev=1585.87, samples=10 00:25:35.848 iops : min= 194, max= 232, avg=211.20, stdev=12.41, samples=10 00:25:35.848 lat (msec) : 10=3.02%, 20=96.41%, 50=0.28%, 100=0.28% 00:25:35.848 cpu : usr=94.56%, sys=4.94%, ctx=6, majf=0, minf=100 00:25:35.848 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:35.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.848 issued rwts: total=1059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.848 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:35.848 filename0: (groupid=0, jobs=1): err= 0: pid=1317475: Mon Jul 15 10:42:23 2024 00:25:35.848 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(145MiB/5005msec) 00:25:35.848 slat (nsec): min=6032, max=84691, avg=16846.12, stdev=5705.50 00:25:35.848 clat (usec): min=6596, max=55051, avg=12919.04, stdev=3711.91 00:25:35.848 lat (usec): min=6609, max=55070, avg=12935.88, stdev=3712.02 00:25:35.848 clat percentiles (usec): 00:25:35.848 | 1.00th=[ 8029], 5.00th=[10421], 10.00th=[10814], 20.00th=[11469], 00:25:35.848 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 60.00th=[12911], 00:25:35.848 | 70.00th=[13435], 80.00th=[13960], 90.00th=[14746], 95.00th=[15533], 00:25:35.848 | 99.00th=[17433], 99.50th=[49546], 99.90th=[54264], 99.95th=[55313], 00:25:35.848 | 99.99th=[55313] 00:25:35.848 bw ( KiB/s): min=26112, max=31744, per=34.82%, avg=29644.80, stdev=1631.63, samples=10 00:25:35.848 iops : min= 204, max= 248, avg=231.60, stdev=12.75, samples=10 00:25:35.848 lat (msec) : 10=3.88%, 20=95.34%, 50=0.52%, 100=0.26% 00:25:35.848 cpu : usr=93.35%, sys=6.16%, ctx=8, majf=0, minf=141 00:25:35.848 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:35.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.848 issued rwts: total=1160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.848 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:35.848 filename0: (groupid=0, jobs=1): err= 0: pid=1317476: Mon Jul 15 10:42:23 2024 00:25:35.848 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(142MiB/5047msec) 00:25:35.848 slat (nsec): min=5512, max=91184, avg=25145.90, stdev=10168.98 00:25:35.848 clat (usec): min=6347, max=54664, avg=13233.10, stdev=4627.81 00:25:35.848 lat (usec): min=6354, max=54677, avg=13258.24, stdev=4627.47 00:25:35.848 clat percentiles (usec): 
00:25:35.848 | 1.00th=[ 7898], 5.00th=[10290], 10.00th=[10683], 20.00th=[11207], 00:25:35.848 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12518], 60.00th=[13042], 00:25:35.848 | 70.00th=[13698], 80.00th=[14484], 90.00th=[15533], 95.00th=[16450], 00:25:35.848 | 99.00th=[47449], 99.50th=[52167], 99.90th=[53740], 99.95th=[54789], 00:25:35.848 | 99.99th=[54789] 00:25:35.848 bw ( KiB/s): min=21760, max=32768, per=34.16%, avg=29081.60, stdev=3133.49, samples=10 00:25:35.848 iops : min= 170, max= 256, avg=227.20, stdev=24.48, samples=10 00:25:35.848 lat (msec) : 10=3.25%, 20=95.43%, 50=0.44%, 100=0.88% 00:25:35.848 cpu : usr=86.44%, sys=8.88%, ctx=480, majf=0, minf=126 00:25:35.848 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:35.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.848 issued rwts: total=1138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.848 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:35.848 00:25:35.848 Run status group 0 (all jobs): 00:25:35.848 READ: bw=83.1MiB/s (87.2MB/s), 26.4MiB/s-29.0MiB/s (27.7MB/s-30.4MB/s), io=420MiB (440MB), run=5005-5047msec 00:25:35.848 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:35.848 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:35.848 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:35.848 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:35.848 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:35.848 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:35.848 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.848 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:35.848 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.848 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:35.848 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
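Between passes, destroy_subsystems undoes the setup in reverse order before the next parameter set (NULL_DIF=2, bs=4k, numjobs=8, iodepth=16, two extra files) rebuilds three subsystems the same way. The teardown traced above, per subsystem:

# destroy_subsystem N: delete the subsystem (its namespace and listener go with it),
# then the backing null bdev
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd bdev_null_delete bdev_null0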
00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:35.849 bdev_null0 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:35.849 [2024-07-15 10:42:23.557839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:35.849 bdev_null1 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:35.849 bdev_null2 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:25:35.849 { 00:25:35.849 "params": { 00:25:35.849 "name": "Nvme$subsystem", 00:25:35.849 "trtype": "$TEST_TRANSPORT", 00:25:35.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.849 "adrfam": "ipv4", 00:25:35.849 "trsvcid": "$NVMF_PORT", 00:25:35.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.849 "hdgst": ${hdgst:-false}, 00:25:35.849 "ddgst": ${ddgst:-false} 00:25:35.849 }, 00:25:35.849 "method": "bdev_nvme_attach_controller" 00:25:35.849 } 00:25:35.849 EOF 00:25:35.849 )") 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:35.849 10:42:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:35.849 { 00:25:35.849 "params": { 00:25:35.849 "name": "Nvme$subsystem", 00:25:35.849 "trtype": "$TEST_TRANSPORT", 00:25:35.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.849 "adrfam": "ipv4", 00:25:35.849 "trsvcid": "$NVMF_PORT", 00:25:35.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.849 "hdgst": ${hdgst:-false}, 00:25:35.849 "ddgst": ${ddgst:-false} 00:25:35.850 }, 00:25:35.850 "method": "bdev_nvme_attach_controller" 00:25:35.850 } 00:25:35.850 EOF 00:25:35.850 )") 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:35.850 { 00:25:35.850 "params": { 00:25:35.850 "name": "Nvme$subsystem", 00:25:35.850 "trtype": "$TEST_TRANSPORT", 00:25:35.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.850 "adrfam": "ipv4", 00:25:35.850 "trsvcid": "$NVMF_PORT", 00:25:35.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.850 "hdgst": ${hdgst:-false}, 00:25:35.850 "ddgst": ${ddgst:-false} 00:25:35.850 }, 00:25:35.850 "method": "bdev_nvme_attach_controller" 00:25:35.850 } 00:25:35.850 EOF 00:25:35.850 )") 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:35.850 "params": { 00:25:35.850 "name": "Nvme0", 00:25:35.850 "trtype": "tcp", 00:25:35.850 "traddr": "10.0.0.2", 00:25:35.850 "adrfam": "ipv4", 00:25:35.850 "trsvcid": "4420", 00:25:35.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:35.850 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:35.850 "hdgst": false, 00:25:35.850 "ddgst": false 00:25:35.850 }, 00:25:35.850 "method": "bdev_nvme_attach_controller" 00:25:35.850 },{ 00:25:35.850 "params": { 00:25:35.850 "name": "Nvme1", 00:25:35.850 "trtype": "tcp", 00:25:35.850 "traddr": "10.0.0.2", 00:25:35.850 "adrfam": "ipv4", 00:25:35.850 "trsvcid": "4420", 00:25:35.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:35.850 "hdgst": false, 00:25:35.850 "ddgst": false 00:25:35.850 }, 00:25:35.850 "method": "bdev_nvme_attach_controller" 00:25:35.850 },{ 00:25:35.850 "params": { 00:25:35.850 "name": "Nvme2", 00:25:35.850 "trtype": "tcp", 00:25:35.850 "traddr": "10.0.0.2", 00:25:35.850 "adrfam": "ipv4", 00:25:35.850 "trsvcid": "4420", 00:25:35.850 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:35.850 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:35.850 "hdgst": false, 00:25:35.850 "ddgst": false 00:25:35.850 }, 00:25:35.850 "method": "bdev_nvme_attach_controller" 00:25:35.850 }' 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:35.850 10:42:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:35.850 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:35.850 ... 00:25:35.850 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:35.850 ... 00:25:35.850 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:35.850 ... 00:25:35.850 fio-3.35 00:25:35.850 Starting 24 threads 00:25:35.850 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.054 00:25:48.054 filename0: (groupid=0, jobs=1): err= 0: pid=1318342: Mon Jul 15 10:42:34 2024 00:25:48.054 read: IOPS=80, BW=321KiB/s (329kB/s)(3248KiB/10107msec) 00:25:48.054 slat (usec): min=3, max=121, avg=12.23, stdev= 9.67 00:25:48.054 clat (msec): min=101, max=395, avg=199.04, stdev=58.36 00:25:48.054 lat (msec): min=101, max=395, avg=199.05, stdev=58.36 00:25:48.054 clat percentiles (msec): 00:25:48.054 | 1.00th=[ 104], 5.00th=[ 120], 10.00th=[ 132], 20.00th=[ 144], 00:25:48.054 | 30.00th=[ 161], 40.00th=[ 178], 50.00th=[ 188], 60.00th=[ 213], 00:25:48.054 | 70.00th=[ 234], 80.00th=[ 251], 90.00th=[ 266], 95.00th=[ 296], 00:25:48.054 | 99.00th=[ 388], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:25:48.054 | 99.99th=[ 397] 00:25:48.054 bw ( KiB/s): min= 208, max= 512, per=5.12%, avg=318.40, stdev=85.76, samples=20 00:25:48.054 iops : min= 52, max= 128, avg=79.60, stdev=21.44, samples=20 00:25:48.054 lat (msec) : 250=78.82%, 500=21.18% 00:25:48.054 cpu : usr=97.87%, sys=1.52%, ctx=59, majf=0, minf=50 00:25:48.054 IO depths : 1=0.2%, 2=1.5%, 4=9.5%, 8=76.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:25:48.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.054 complete : 0=0.0%, 4=89.6%, 8=5.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.054 issued rwts: total=812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.054 filename0: (groupid=0, jobs=1): err= 0: pid=1318343: Mon Jul 15 10:42:34 2024 00:25:48.054 read: IOPS=64, BW=258KiB/s (265kB/s)(2608KiB/10096msec) 00:25:48.054 slat (usec): min=7, max=114, avg=41.64, stdev=31.54 00:25:48.054 clat (msec): min=117, max=401, avg=247.11, stdev=45.16 00:25:48.054 lat (msec): min=117, max=401, avg=247.15, stdev=45.16 00:25:48.054 clat percentiles (msec): 00:25:48.054 | 1.00th=[ 118], 5.00th=[ 178], 10.00th=[ 194], 20.00th=[ 211], 00:25:48.054 | 30.00th=[ 224], 40.00th=[ 243], 50.00th=[ 257], 60.00th=[ 264], 00:25:48.054 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 305], 95.00th=[ 305], 00:25:48.054 | 99.00th=[ 372], 99.50th=[ 376], 99.90th=[ 401], 99.95th=[ 401], 00:25:48.054 | 99.99th=[ 401] 00:25:48.054 bw ( KiB/s): min= 128, max= 384, per=4.09%, avg=254.40, stdev=46.69, samples=20 00:25:48.054 iops : min= 32, max= 96, avg=63.60, stdev=11.67, samples=20 00:25:48.054 lat (msec) : 250=44.79%, 500=55.21% 00:25:48.054 cpu : usr=98.31%, sys=1.22%, ctx=29, majf=0, minf=33 00:25:48.054 IO depths : 1=2.1%, 2=5.8%, 
4=17.2%, 8=64.4%, 16=10.4%, 32=0.0%, >=64=0.0% 00:25:48.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.054 complete : 0=0.0%, 4=91.8%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 issued rwts: total=652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.055 filename0: (groupid=0, jobs=1): err= 0: pid=1318344: Mon Jul 15 10:42:34 2024 00:25:48.055 read: IOPS=75, BW=304KiB/s (311kB/s)(3072KiB/10109msec) 00:25:48.055 slat (usec): min=4, max=112, avg=30.75, stdev=25.89 00:25:48.055 clat (msec): min=4, max=326, avg=208.83, stdev=67.48 00:25:48.055 lat (msec): min=4, max=326, avg=208.86, stdev=67.48 00:25:48.055 clat percentiles (msec): 00:25:48.055 | 1.00th=[ 5], 5.00th=[ 43], 10.00th=[ 111], 20.00th=[ 174], 00:25:48.055 | 30.00th=[ 201], 40.00th=[ 213], 50.00th=[ 230], 60.00th=[ 241], 00:25:48.055 | 70.00th=[ 257], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 275], 00:25:48.055 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 326], 00:25:48.055 | 99.99th=[ 326] 00:25:48.055 bw ( KiB/s): min= 128, max= 768, per=4.83%, avg=300.80, stdev=132.41, samples=20 00:25:48.055 iops : min= 32, max= 192, avg=75.20, stdev=33.10, samples=20 00:25:48.055 lat (msec) : 10=2.08%, 50=5.73%, 100=0.52%, 250=55.99%, 500=35.68% 00:25:48.055 cpu : usr=98.18%, sys=1.34%, ctx=31, majf=0, minf=42 00:25:48.055 IO depths : 1=3.0%, 2=9.2%, 4=25.0%, 8=53.3%, 16=9.5%, 32=0.0%, >=64=0.0% 00:25:48.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.055 filename0: (groupid=0, jobs=1): err= 0: pid=1318345: Mon Jul 15 10:42:34 2024 00:25:48.055 read: IOPS=60, BW=244KiB/s (249kB/s)(2456KiB/10081msec) 00:25:48.055 slat (nsec): min=8066, max=99602, avg=36095.93, stdev=23956.57 00:25:48.055 clat (msec): min=102, max=510, avg=262.19, stdev=62.42 00:25:48.055 lat (msec): min=102, max=510, avg=262.22, stdev=62.41 00:25:48.055 clat percentiles (msec): 00:25:48.055 | 1.00th=[ 104], 5.00th=[ 161], 10.00th=[ 192], 20.00th=[ 205], 00:25:48.055 | 30.00th=[ 224], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 271], 00:25:48.055 | 70.00th=[ 288], 80.00th=[ 309], 90.00th=[ 351], 95.00th=[ 376], 00:25:48.055 | 99.00th=[ 401], 99.50th=[ 443], 99.90th=[ 510], 99.95th=[ 510], 00:25:48.055 | 99.99th=[ 510] 00:25:48.055 bw ( KiB/s): min= 128, max= 384, per=3.85%, avg=239.20, stdev=58.15, samples=20 00:25:48.055 iops : min= 32, max= 96, avg=59.80, stdev=14.54, samples=20 00:25:48.055 lat (msec) : 250=39.09%, 500=60.59%, 750=0.33% 00:25:48.055 cpu : usr=98.21%, sys=1.27%, ctx=32, majf=0, minf=42 00:25:48.055 IO depths : 1=2.3%, 2=7.0%, 4=20.4%, 8=60.1%, 16=10.3%, 32=0.0%, >=64=0.0% 00:25:48.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 complete : 0=0.0%, 4=92.8%, 8=1.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.055 filename0: (groupid=0, jobs=1): err= 0: pid=1318346: Mon Jul 15 10:42:34 2024 00:25:48.055 read: IOPS=67, BW=268KiB/s (275kB/s)(2688KiB/10019msec) 00:25:48.055 slat (usec): min=4, max=102, avg=28.96, stdev=22.78 00:25:48.055 clat (msec): min=146, max=348, avg=238.27, stdev=33.85 
00:25:48.055 lat (msec): min=146, max=348, avg=238.30, stdev=33.84 00:25:48.055 clat percentiles (msec): 00:25:48.055 | 1.00th=[ 174], 5.00th=[ 176], 10.00th=[ 188], 20.00th=[ 211], 00:25:48.055 | 30.00th=[ 218], 40.00th=[ 232], 50.00th=[ 241], 60.00th=[ 257], 00:25:48.055 | 70.00th=[ 262], 80.00th=[ 268], 90.00th=[ 275], 95.00th=[ 296], 00:25:48.055 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 351], 99.95th=[ 351], 00:25:48.055 | 99.99th=[ 351] 00:25:48.055 bw ( KiB/s): min= 128, max= 384, per=4.22%, avg=262.40, stdev=63.87, samples=20 00:25:48.055 iops : min= 32, max= 96, avg=65.60, stdev=15.97, samples=20 00:25:48.055 lat (msec) : 250=54.76%, 500=45.24% 00:25:48.055 cpu : usr=98.20%, sys=1.29%, ctx=34, majf=0, minf=42 00:25:48.055 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:25:48.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.055 filename0: (groupid=0, jobs=1): err= 0: pid=1318347: Mon Jul 15 10:42:34 2024 00:25:48.055 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10083msec) 00:25:48.055 slat (usec): min=7, max=111, avg=51.26, stdev=30.10 00:25:48.055 clat (msec): min=132, max=483, avg=263.02, stdev=54.67 00:25:48.055 lat (msec): min=132, max=483, avg=263.07, stdev=54.68 00:25:48.055 clat percentiles (msec): 00:25:48.055 | 1.00th=[ 138], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 213], 00:25:48.055 | 30.00th=[ 239], 40.00th=[ 257], 50.00th=[ 262], 60.00th=[ 268], 00:25:48.055 | 70.00th=[ 275], 80.00th=[ 305], 90.00th=[ 342], 95.00th=[ 359], 00:25:48.055 | 99.00th=[ 401], 99.50th=[ 456], 99.90th=[ 485], 99.95th=[ 485], 00:25:48.055 | 99.99th=[ 485] 00:25:48.055 bw ( KiB/s): min= 128, max= 384, per=3.90%, avg=242.40, stdev=57.40, samples=20 00:25:48.055 iops : min= 32, max= 96, avg=60.60, stdev=14.35, samples=20 00:25:48.055 lat (msec) : 250=35.20%, 500=64.80% 00:25:48.055 cpu : usr=98.16%, sys=1.41%, ctx=31, majf=0, minf=49 00:25:48.055 IO depths : 1=2.5%, 2=8.7%, 4=25.0%, 8=53.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:25:48.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.055 filename0: (groupid=0, jobs=1): err= 0: pid=1318348: Mon Jul 15 10:42:34 2024 00:25:48.055 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10105msec) 00:25:48.055 slat (usec): min=3, max=131, avg=29.05, stdev=25.00 00:25:48.055 clat (msec): min=114, max=346, avg=240.06, stdev=39.84 00:25:48.055 lat (msec): min=114, max=346, avg=240.09, stdev=39.83 00:25:48.055 clat percentiles (msec): 00:25:48.055 | 1.00th=[ 115], 5.00th=[ 178], 10.00th=[ 194], 20.00th=[ 205], 00:25:48.055 | 30.00th=[ 218], 40.00th=[ 224], 50.00th=[ 247], 60.00th=[ 257], 00:25:48.055 | 70.00th=[ 262], 80.00th=[ 268], 90.00th=[ 279], 95.00th=[ 309], 00:25:48.055 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:25:48.055 | 99.99th=[ 347] 00:25:48.055 bw ( KiB/s): min= 128, max= 384, per=4.22%, avg=262.40, stdev=60.63, samples=20 00:25:48.055 iops : min= 32, max= 96, avg=65.60, stdev=15.16, samples=20 00:25:48.055 lat (msec) : 250=51.19%, 500=48.81% 00:25:48.055 cpu : usr=97.99%, 
sys=1.43%, ctx=50, majf=0, minf=33 00:25:48.055 IO depths : 1=4.2%, 2=10.0%, 4=23.7%, 8=53.9%, 16=8.3%, 32=0.0%, >=64=0.0% 00:25:48.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.055 filename0: (groupid=0, jobs=1): err= 0: pid=1318349: Mon Jul 15 10:42:34 2024 00:25:48.055 read: IOPS=68, BW=272KiB/s (279kB/s)(2752KiB/10106msec) 00:25:48.055 slat (nsec): min=8035, max=72192, avg=18846.78, stdev=11330.57 00:25:48.055 clat (msec): min=164, max=296, avg=234.79, stdev=33.18 00:25:48.055 lat (msec): min=164, max=296, avg=234.81, stdev=33.18 00:25:48.055 clat percentiles (msec): 00:25:48.055 | 1.00th=[ 165], 5.00th=[ 186], 10.00th=[ 186], 20.00th=[ 203], 00:25:48.055 | 30.00th=[ 213], 40.00th=[ 230], 50.00th=[ 245], 60.00th=[ 255], 00:25:48.055 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 271], 95.00th=[ 279], 00:25:48.055 | 99.00th=[ 296], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:25:48.055 | 99.99th=[ 296] 00:25:48.055 bw ( KiB/s): min= 128, max= 384, per=4.32%, avg=268.80, stdev=57.24, samples=20 00:25:48.055 iops : min= 32, max= 96, avg=67.20, stdev=14.31, samples=20 00:25:48.055 lat (msec) : 250=53.49%, 500=46.51% 00:25:48.055 cpu : usr=97.98%, sys=1.64%, ctx=16, majf=0, minf=40 00:25:48.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:48.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.055 filename1: (groupid=0, jobs=1): err= 0: pid=1318350: Mon Jul 15 10:42:34 2024 00:25:48.055 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10109msec) 00:25:48.055 slat (usec): min=5, max=130, avg=51.51, stdev=30.45 00:25:48.055 clat (msec): min=30, max=515, avg=245.22, stdev=81.27 00:25:48.055 lat (msec): min=30, max=515, avg=245.27, stdev=81.28 00:25:48.055 clat percentiles (msec): 00:25:48.055 | 1.00th=[ 32], 5.00th=[ 75], 10.00th=[ 146], 20.00th=[ 192], 00:25:48.055 | 30.00th=[ 209], 40.00th=[ 247], 50.00th=[ 262], 60.00th=[ 268], 00:25:48.055 | 70.00th=[ 275], 80.00th=[ 296], 90.00th=[ 351], 95.00th=[ 359], 00:25:48.055 | 99.00th=[ 376], 99.50th=[ 439], 99.90th=[ 514], 99.95th=[ 514], 00:25:48.055 | 99.99th=[ 514] 00:25:48.055 bw ( KiB/s): min= 128, max= 512, per=4.11%, avg=256.00, stdev=100.79, samples=20 00:25:48.055 iops : min= 32, max= 128, avg=64.00, stdev=25.20, samples=20 00:25:48.055 lat (msec) : 50=4.88%, 100=2.13%, 250=33.84%, 500=58.84%, 750=0.30% 00:25:48.055 cpu : usr=97.85%, sys=1.49%, ctx=78, majf=0, minf=58 00:25:48.055 IO depths : 1=3.8%, 2=9.9%, 4=24.5%, 8=53.0%, 16=8.7%, 32=0.0%, >=64=0.0% 00:25:48.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.055 filename1: (groupid=0, jobs=1): err= 0: pid=1318351: Mon Jul 15 10:42:34 2024 00:25:48.055 read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10108msec) 00:25:48.055 slat (usec): min=5, max=111, avg=46.80, stdev=30.54 
00:25:48.055 clat (msec): min=110, max=457, avg=251.35, stdev=61.69 00:25:48.055 lat (msec): min=110, max=457, avg=251.40, stdev=61.70 00:25:48.055 clat percentiles (msec): 00:25:48.055 | 1.00th=[ 111], 5.00th=[ 118], 10.00th=[ 186], 20.00th=[ 205], 00:25:48.055 | 30.00th=[ 222], 40.00th=[ 245], 50.00th=[ 259], 60.00th=[ 266], 00:25:48.055 | 70.00th=[ 268], 80.00th=[ 284], 90.00th=[ 347], 95.00th=[ 351], 00:25:48.055 | 99.00th=[ 401], 99.50th=[ 443], 99.90th=[ 460], 99.95th=[ 460], 00:25:48.055 | 99.99th=[ 460] 00:25:48.055 bw ( KiB/s): min= 128, max= 384, per=4.06%, avg=252.00, stdev=70.96, samples=20 00:25:48.055 iops : min= 32, max= 96, avg=63.00, stdev=17.74, samples=20 00:25:48.055 lat (msec) : 250=42.81%, 500=57.19% 00:25:48.055 cpu : usr=98.05%, sys=1.51%, ctx=17, majf=0, minf=38 00:25:48.055 IO depths : 1=2.2%, 2=7.2%, 4=21.3%, 8=59.1%, 16=10.3%, 32=0.0%, >=64=0.0% 00:25:48.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.055 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.055 filename1: (groupid=0, jobs=1): err= 0: pid=1318352: Mon Jul 15 10:42:34 2024 00:25:48.056 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10082msec) 00:25:48.056 slat (nsec): min=7870, max=97909, avg=33677.50, stdev=21753.23 00:25:48.056 clat (msec): min=85, max=523, avg=287.76, stdev=84.10 00:25:48.056 lat (msec): min=85, max=523, avg=287.80, stdev=84.08 00:25:48.056 clat percentiles (msec): 00:25:48.056 | 1.00th=[ 148], 5.00th=[ 169], 10.00th=[ 190], 20.00th=[ 203], 00:25:48.056 | 30.00th=[ 222], 40.00th=[ 268], 50.00th=[ 284], 60.00th=[ 305], 00:25:48.056 | 70.00th=[ 342], 80.00th=[ 372], 90.00th=[ 401], 95.00th=[ 422], 00:25:48.056 | 99.00th=[ 518], 99.50th=[ 523], 99.90th=[ 523], 99.95th=[ 523], 00:25:48.056 | 99.99th=[ 523] 00:25:48.056 bw ( KiB/s): min= 128, max= 384, per=3.50%, avg=217.60, stdev=84.09, samples=20 00:25:48.056 iops : min= 32, max= 96, avg=54.40, stdev=21.02, samples=20 00:25:48.056 lat (msec) : 100=0.36%, 250=37.86%, 500=60.71%, 750=1.07% 00:25:48.056 cpu : usr=98.04%, sys=1.37%, ctx=108, majf=0, minf=40 00:25:48.056 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:25:48.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.056 filename1: (groupid=0, jobs=1): err= 0: pid=1318353: Mon Jul 15 10:42:34 2024 00:25:48.056 read: IOPS=67, BW=268KiB/s (275kB/s)(2688KiB/10016msec) 00:25:48.056 slat (nsec): min=8053, max=78568, avg=23514.45, stdev=15556.35 00:25:48.056 clat (msec): min=173, max=296, avg=238.28, stdev=30.98 00:25:48.056 lat (msec): min=173, max=296, avg=238.30, stdev=30.98 00:25:48.056 clat percentiles (msec): 00:25:48.056 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 207], 00:25:48.056 | 30.00th=[ 218], 40.00th=[ 232], 50.00th=[ 239], 60.00th=[ 255], 00:25:48.056 | 70.00th=[ 262], 80.00th=[ 268], 90.00th=[ 275], 95.00th=[ 279], 00:25:48.056 | 99.00th=[ 296], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:25:48.056 | 99.99th=[ 296] 00:25:48.056 bw ( KiB/s): min= 128, max= 368, per=4.22%, avg=262.40, stdev=59.05, samples=20 00:25:48.056 iops : min= 32, max= 92, avg=65.60, 
stdev=14.76, samples=20 00:25:48.056 lat (msec) : 250=55.65%, 500=44.35% 00:25:48.056 cpu : usr=98.12%, sys=1.48%, ctx=15, majf=0, minf=37 00:25:48.056 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:25:48.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.056 filename1: (groupid=0, jobs=1): err= 0: pid=1318354: Mon Jul 15 10:42:34 2024 00:25:48.056 read: IOPS=68, BW=273KiB/s (280kB/s)(2760KiB/10106msec) 00:25:48.056 slat (usec): min=7, max=103, avg=26.03, stdev=24.05 00:25:48.056 clat (msec): min=126, max=406, avg=233.80, stdev=42.07 00:25:48.056 lat (msec): min=126, max=406, avg=233.83, stdev=42.06 00:25:48.056 clat percentiles (msec): 00:25:48.056 | 1.00th=[ 127], 5.00th=[ 174], 10.00th=[ 186], 20.00th=[ 203], 00:25:48.056 | 30.00th=[ 213], 40.00th=[ 222], 50.00th=[ 230], 60.00th=[ 245], 00:25:48.056 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 275], 95.00th=[ 279], 00:25:48.056 | 99.00th=[ 384], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:25:48.056 | 99.99th=[ 405] 00:25:48.056 bw ( KiB/s): min= 176, max= 384, per=4.33%, avg=269.60, stdev=48.77, samples=20 00:25:48.056 iops : min= 44, max= 96, avg=67.40, stdev=12.19, samples=20 00:25:48.056 lat (msec) : 250=62.03%, 500=37.97% 00:25:48.056 cpu : usr=98.06%, sys=1.43%, ctx=33, majf=0, minf=42 00:25:48.056 IO depths : 1=3.2%, 2=6.7%, 4=16.5%, 8=64.2%, 16=9.4%, 32=0.0%, >=64=0.0% 00:25:48.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 complete : 0=0.0%, 4=91.5%, 8=2.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 issued rwts: total=690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.056 filename1: (groupid=0, jobs=1): err= 0: pid=1318355: Mon Jul 15 10:42:34 2024 00:25:48.056 read: IOPS=61, BW=246KiB/s (252kB/s)(2480KiB/10090msec) 00:25:48.056 slat (usec): min=7, max=119, avg=51.92, stdev=32.40 00:25:48.056 clat (msec): min=111, max=461, avg=259.58, stdev=56.08 00:25:48.056 lat (msec): min=111, max=461, avg=259.63, stdev=56.09 00:25:48.056 clat percentiles (msec): 00:25:48.056 | 1.00th=[ 134], 5.00th=[ 180], 10.00th=[ 194], 20.00th=[ 213], 00:25:48.056 | 30.00th=[ 224], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 268], 00:25:48.056 | 70.00th=[ 275], 80.00th=[ 305], 90.00th=[ 342], 95.00th=[ 359], 00:25:48.056 | 99.00th=[ 418], 99.50th=[ 447], 99.90th=[ 460], 99.95th=[ 460], 00:25:48.056 | 99.99th=[ 460] 00:25:48.056 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=241.60, stdev=53.92, samples=20 00:25:48.056 iops : min= 32, max= 96, avg=60.40, stdev=13.48, samples=20 00:25:48.056 lat (msec) : 250=37.10%, 500=62.90% 00:25:48.056 cpu : usr=97.95%, sys=1.51%, ctx=39, majf=0, minf=51 00:25:48.056 IO depths : 1=2.4%, 2=6.9%, 4=19.7%, 8=60.8%, 16=10.2%, 32=0.0%, >=64=0.0% 00:25:48.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 complete : 0=0.0%, 4=92.5%, 8=1.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 issued rwts: total=620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.056 filename1: (groupid=0, jobs=1): err= 0: pid=1318356: Mon Jul 15 10:42:34 2024 00:25:48.056 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10086msec) 
00:25:48.056 slat (nsec): min=8073, max=89844, avg=29912.08, stdev=20627.85 00:25:48.056 clat (msec): min=96, max=381, avg=258.30, stdev=51.37 00:25:48.056 lat (msec): min=96, max=381, avg=258.33, stdev=51.36 00:25:48.056 clat percentiles (msec): 00:25:48.056 | 1.00th=[ 161], 5.00th=[ 190], 10.00th=[ 197], 20.00th=[ 207], 00:25:48.056 | 30.00th=[ 222], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 268], 00:25:48.056 | 70.00th=[ 275], 80.00th=[ 305], 90.00th=[ 342], 95.00th=[ 355], 00:25:48.056 | 99.00th=[ 359], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:25:48.056 | 99.99th=[ 380] 00:25:48.056 bw ( KiB/s): min= 128, max= 384, per=3.92%, avg=243.20, stdev=69.37, samples=20 00:25:48.056 iops : min= 32, max= 96, avg=60.80, stdev=17.34, samples=20 00:25:48.056 lat (msec) : 100=0.32%, 250=38.46%, 500=61.22% 00:25:48.056 cpu : usr=98.20%, sys=1.39%, ctx=30, majf=0, minf=36 00:25:48.056 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:25:48.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.056 filename1: (groupid=0, jobs=1): err= 0: pid=1318357: Mon Jul 15 10:42:34 2024 00:25:48.056 read: IOPS=55, BW=221KiB/s (227kB/s)(2232KiB/10081msec) 00:25:48.056 slat (usec): min=8, max=104, avg=65.34, stdev=18.19 00:25:48.056 clat (msec): min=85, max=532, avg=288.46, stdev=87.24 00:25:48.056 lat (msec): min=85, max=532, avg=288.53, stdev=87.25 00:25:48.056 clat percentiles (msec): 00:25:48.056 | 1.00th=[ 86], 5.00th=[ 176], 10.00th=[ 192], 20.00th=[ 205], 00:25:48.056 | 30.00th=[ 224], 40.00th=[ 259], 50.00th=[ 279], 60.00th=[ 305], 00:25:48.056 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 414], 95.00th=[ 422], 00:25:48.056 | 99.00th=[ 523], 99.50th=[ 531], 99.90th=[ 535], 99.95th=[ 535], 00:25:48.056 | 99.99th=[ 535] 00:25:48.056 bw ( KiB/s): min= 128, max= 384, per=3.48%, avg=216.80, stdev=68.76, samples=20 00:25:48.056 iops : min= 32, max= 96, avg=54.20, stdev=17.19, samples=20 00:25:48.056 lat (msec) : 100=2.51%, 250=32.62%, 500=63.44%, 750=1.43% 00:25:48.056 cpu : usr=98.08%, sys=1.39%, ctx=17, majf=0, minf=33 00:25:48.056 IO depths : 1=3.4%, 2=9.7%, 4=25.1%, 8=52.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:25:48.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 issued rwts: total=558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.056 filename2: (groupid=0, jobs=1): err= 0: pid=1318358: Mon Jul 15 10:42:34 2024 00:25:48.056 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10088msec) 00:25:48.056 slat (nsec): min=7922, max=99428, avg=45254.65, stdev=29767.77 00:25:48.056 clat (msec): min=185, max=408, avg=251.79, stdev=44.10 00:25:48.056 lat (msec): min=186, max=408, avg=251.84, stdev=44.11 00:25:48.056 clat percentiles (msec): 00:25:48.056 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 197], 20.00th=[ 218], 00:25:48.056 | 30.00th=[ 224], 40.00th=[ 239], 50.00th=[ 255], 60.00th=[ 259], 00:25:48.056 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 305], 95.00th=[ 351], 00:25:48.056 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 409], 99.95th=[ 409], 00:25:48.056 | 99.99th=[ 409] 00:25:48.056 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=249.60, 
stdev=65.33, samples=20 00:25:48.056 iops : min= 32, max= 96, avg=62.40, stdev=16.33, samples=20 00:25:48.056 lat (msec) : 250=46.72%, 500=53.28% 00:25:48.056 cpu : usr=97.94%, sys=1.48%, ctx=77, majf=0, minf=34 00:25:48.056 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:48.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.056 filename2: (groupid=0, jobs=1): err= 0: pid=1318359: Mon Jul 15 10:42:34 2024 00:25:48.056 read: IOPS=65, BW=260KiB/s (266kB/s)(2624KiB/10087msec) 00:25:48.056 slat (usec): min=7, max=112, avg=34.63, stdev=28.42 00:25:48.056 clat (msec): min=134, max=414, avg=243.96, stdev=44.46 00:25:48.056 lat (msec): min=134, max=414, avg=243.99, stdev=44.45 00:25:48.056 clat percentiles (msec): 00:25:48.056 | 1.00th=[ 148], 5.00th=[ 178], 10.00th=[ 192], 20.00th=[ 203], 00:25:48.056 | 30.00th=[ 220], 40.00th=[ 230], 50.00th=[ 251], 60.00th=[ 259], 00:25:48.056 | 70.00th=[ 262], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 347], 00:25:48.056 | 99.00th=[ 359], 99.50th=[ 388], 99.90th=[ 414], 99.95th=[ 414], 00:25:48.056 | 99.99th=[ 414] 00:25:48.056 bw ( KiB/s): min= 128, max= 384, per=4.21%, avg=261.60, stdev=48.77, samples=20 00:25:48.056 iops : min= 32, max= 96, avg=65.40, stdev=12.19, samples=20 00:25:48.056 lat (msec) : 250=48.48%, 500=51.52% 00:25:48.056 cpu : usr=97.86%, sys=1.55%, ctx=71, majf=0, minf=44 00:25:48.056 IO depths : 1=1.8%, 2=8.1%, 4=25.0%, 8=54.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:25:48.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.056 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.056 filename2: (groupid=0, jobs=1): err= 0: pid=1318360: Mon Jul 15 10:42:34 2024 00:25:48.056 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10081msec) 00:25:48.056 slat (nsec): min=8090, max=93939, avg=26431.48, stdev=23229.01 00:25:48.056 clat (msec): min=85, max=531, avg=287.78, stdev=83.35 00:25:48.057 lat (msec): min=85, max=531, avg=287.80, stdev=83.33 00:25:48.057 clat percentiles (msec): 00:25:48.057 | 1.00th=[ 86], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 205], 00:25:48.057 | 30.00th=[ 224], 40.00th=[ 259], 50.00th=[ 279], 60.00th=[ 305], 00:25:48.057 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 397], 95.00th=[ 418], 00:25:48.057 | 99.00th=[ 485], 99.50th=[ 502], 99.90th=[ 531], 99.95th=[ 531], 00:25:48.057 | 99.99th=[ 531] 00:25:48.057 bw ( KiB/s): min= 128, max= 384, per=3.50%, avg=217.60, stdev=71.82, samples=20 00:25:48.057 iops : min= 32, max= 96, avg=54.40, stdev=17.95, samples=20 00:25:48.057 lat (msec) : 100=2.86%, 250=32.14%, 500=64.64%, 750=0.36% 00:25:48.057 cpu : usr=97.86%, sys=1.73%, ctx=21, majf=0, minf=47 00:25:48.057 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:25:48.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.057 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.057 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.057 filename2: (groupid=0, jobs=1): err= 0: 
pid=1318361: Mon Jul 15 10:42:34 2024 00:25:48.057 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10102msec) 00:25:48.057 slat (usec): min=3, max=108, avg=63.98, stdev=22.54 00:25:48.057 clat (msec): min=177, max=521, avg=286.00, stdev=73.92 00:25:48.057 lat (msec): min=177, max=521, avg=286.06, stdev=73.93 00:25:48.057 clat percentiles (msec): 00:25:48.057 | 1.00th=[ 178], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 213], 00:25:48.057 | 30.00th=[ 224], 40.00th=[ 262], 50.00th=[ 275], 60.00th=[ 300], 00:25:48.057 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 380], 95.00th=[ 422], 00:25:48.057 | 99.00th=[ 426], 99.50th=[ 485], 99.90th=[ 523], 99.95th=[ 523], 00:25:48.057 | 99.99th=[ 523] 00:25:48.057 bw ( KiB/s): min= 128, max= 384, per=3.50%, avg=217.60, stdev=73.12, samples=20 00:25:48.057 iops : min= 32, max= 96, avg=54.40, stdev=18.28, samples=20 00:25:48.057 lat (msec) : 250=38.21%, 500=61.43%, 750=0.36% 00:25:48.057 cpu : usr=97.87%, sys=1.54%, ctx=94, majf=0, minf=39 00:25:48.057 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:25:48.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.057 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.057 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.057 filename2: (groupid=0, jobs=1): err= 0: pid=1318362: Mon Jul 15 10:42:34 2024 00:25:48.057 read: IOPS=73, BW=293KiB/s (300kB/s)(2960KiB/10108msec) 00:25:48.057 slat (nsec): min=7835, max=71566, avg=21309.53, stdev=6669.32 00:25:48.057 clat (msec): min=31, max=405, avg=217.72, stdev=68.08 00:25:48.057 lat (msec): min=31, max=405, avg=217.74, stdev=68.08 00:25:48.057 clat percentiles (msec): 00:25:48.057 | 1.00th=[ 32], 5.00th=[ 75], 10.00th=[ 136], 20.00th=[ 176], 00:25:48.057 | 30.00th=[ 197], 40.00th=[ 218], 50.00th=[ 228], 60.00th=[ 245], 00:25:48.057 | 70.00th=[ 259], 80.00th=[ 268], 90.00th=[ 279], 95.00th=[ 313], 00:25:48.057 | 99.00th=[ 388], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:25:48.057 | 99.99th=[ 405] 00:25:48.057 bw ( KiB/s): min= 176, max= 513, per=4.66%, avg=289.65, stdev=92.25, samples=20 00:25:48.057 iops : min= 44, max= 128, avg=72.40, stdev=23.03, samples=20 00:25:48.057 lat (msec) : 50=4.32%, 100=2.16%, 250=56.49%, 500=37.03% 00:25:48.057 cpu : usr=97.38%, sys=1.77%, ctx=100, majf=0, minf=52 00:25:48.057 IO depths : 1=2.4%, 2=5.9%, 4=16.5%, 8=64.9%, 16=10.3%, 32=0.0%, >=64=0.0% 00:25:48.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.057 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.057 issued rwts: total=740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.057 filename2: (groupid=0, jobs=1): err= 0: pid=1318363: Mon Jul 15 10:42:34 2024 00:25:48.057 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10083msec) 00:25:48.057 slat (nsec): min=8013, max=66440, avg=22466.21, stdev=11999.91 00:25:48.057 clat (msec): min=139, max=538, avg=287.82, stdev=78.94 00:25:48.057 lat (msec): min=139, max=538, avg=287.84, stdev=78.93 00:25:48.057 clat percentiles (msec): 00:25:48.057 | 1.00th=[ 148], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 205], 00:25:48.057 | 30.00th=[ 224], 40.00th=[ 268], 50.00th=[ 284], 60.00th=[ 305], 00:25:48.057 | 70.00th=[ 342], 80.00th=[ 359], 90.00th=[ 376], 95.00th=[ 422], 00:25:48.057 | 99.00th=[ 498], 99.50th=[ 542], 99.90th=[ 542], 
99.95th=[ 542], 00:25:48.057 | 99.99th=[ 542] 00:25:48.057 bw ( KiB/s): min= 128, max= 384, per=3.50%, avg=217.60, stdev=69.34, samples=20 00:25:48.057 iops : min= 32, max= 96, avg=54.40, stdev=17.33, samples=20 00:25:48.057 lat (msec) : 250=35.00%, 500=64.29%, 750=0.71% 00:25:48.057 cpu : usr=98.34%, sys=1.24%, ctx=28, majf=0, minf=37 00:25:48.057 IO depths : 1=3.2%, 2=9.3%, 4=24.5%, 8=53.8%, 16=9.3%, 32=0.0%, >=64=0.0% 00:25:48.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.057 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.057 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.057 filename2: (groupid=0, jobs=1): err= 0: pid=1318364: Mon Jul 15 10:42:34 2024 00:25:48.057 read: IOPS=66, BW=266KiB/s (273kB/s)(2688KiB/10096msec) 00:25:48.057 slat (usec): min=4, max=104, avg=28.90, stdev=22.33 00:25:48.057 clat (msec): min=110, max=325, avg=238.40, stdev=36.34 00:25:48.057 lat (msec): min=110, max=325, avg=238.43, stdev=36.33 00:25:48.057 clat percentiles (msec): 00:25:48.057 | 1.00th=[ 146], 5.00th=[ 176], 10.00th=[ 186], 20.00th=[ 205], 00:25:48.057 | 30.00th=[ 218], 40.00th=[ 234], 50.00th=[ 241], 60.00th=[ 257], 00:25:48.057 | 70.00th=[ 264], 80.00th=[ 268], 90.00th=[ 279], 95.00th=[ 296], 00:25:48.057 | 99.00th=[ 305], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 326], 00:25:48.057 | 99.99th=[ 326] 00:25:48.057 bw ( KiB/s): min= 144, max= 384, per=4.22%, avg=262.40, stdev=59.05, samples=20 00:25:48.057 iops : min= 36, max= 96, avg=65.60, stdev=14.76, samples=20 00:25:48.057 lat (msec) : 250=53.57%, 500=46.43% 00:25:48.057 cpu : usr=98.33%, sys=1.21%, ctx=20, majf=0, minf=36 00:25:48.057 IO depths : 1=1.6%, 2=7.9%, 4=25.0%, 8=54.6%, 16=10.9%, 32=0.0%, >=64=0.0% 00:25:48.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.057 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.057 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.057 filename2: (groupid=0, jobs=1): err= 0: pid=1318365: Mon Jul 15 10:42:34 2024 00:25:48.057 read: IOPS=78, BW=313KiB/s (320kB/s)(3160KiB/10106msec) 00:25:48.057 slat (nsec): min=7906, max=85843, avg=15793.08, stdev=13480.72 00:25:48.057 clat (msec): min=111, max=294, avg=204.24, stdev=44.67 00:25:48.057 lat (msec): min=111, max=294, avg=204.26, stdev=44.67 00:25:48.057 clat percentiles (msec): 00:25:48.057 | 1.00th=[ 112], 5.00th=[ 136], 10.00th=[ 146], 20.00th=[ 159], 00:25:48.057 | 30.00th=[ 178], 40.00th=[ 194], 50.00th=[ 205], 60.00th=[ 218], 00:25:48.057 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 264], 95.00th=[ 268], 00:25:48.057 | 99.00th=[ 271], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:25:48.057 | 99.99th=[ 296] 00:25:48.057 bw ( KiB/s): min= 208, max= 512, per=4.98%, avg=309.60, stdev=69.70, samples=20 00:25:48.057 iops : min= 52, max= 128, avg=77.40, stdev=17.42, samples=20 00:25:48.057 lat (msec) : 250=78.48%, 500=21.52% 00:25:48.057 cpu : usr=97.96%, sys=1.57%, ctx=25, majf=0, minf=56 00:25:48.057 IO depths : 1=0.3%, 2=2.7%, 4=13.4%, 8=71.4%, 16=12.3%, 32=0.0%, >=64=0.0% 00:25:48.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.057 complete : 0=0.0%, 4=90.9%, 8=3.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.057 issued rwts: total=790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.057 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:25:48.057 00:25:48.057 Run status group 0 (all jobs): 00:25:48.057 READ: bw=6206KiB/s (6355kB/s), 221KiB/s-321KiB/s (227kB/s-329kB/s), io=61.3MiB (64.2MB), run=10016-10109msec 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:48.057 10:42:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:48.058 bdev_null0 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:48.058 [2024-07-15 10:42:35.333919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:48.058 10:42:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:48.058 bdev_null1 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:48.058 { 00:25:48.058 "params": { 00:25:48.058 "name": "Nvme$subsystem", 00:25:48.058 "trtype": "$TEST_TRANSPORT", 00:25:48.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.058 "adrfam": "ipv4", 00:25:48.058 "trsvcid": "$NVMF_PORT", 00:25:48.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.058 "hdgst": ${hdgst:-false}, 00:25:48.058 "ddgst": ${ddgst:-false} 00:25:48.058 }, 00:25:48.058 "method": "bdev_nvme_attach_controller" 00:25:48.058 } 00:25:48.058 EOF 00:25:48.058 )") 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:48.058 { 00:25:48.058 "params": { 00:25:48.058 "name": "Nvme$subsystem", 00:25:48.058 "trtype": "$TEST_TRANSPORT", 00:25:48.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.058 "adrfam": "ipv4", 00:25:48.058 "trsvcid": "$NVMF_PORT", 00:25:48.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.058 "hdgst": ${hdgst:-false}, 00:25:48.058 "ddgst": ${ddgst:-false} 00:25:48.058 }, 00:25:48.058 "method": "bdev_nvme_attach_controller" 00:25:48.058 } 00:25:48.058 EOF 00:25:48.058 )") 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:48.058 "params": { 00:25:48.058 "name": "Nvme0", 00:25:48.058 "trtype": "tcp", 00:25:48.058 "traddr": "10.0.0.2", 00:25:48.058 "adrfam": "ipv4", 00:25:48.058 "trsvcid": "4420", 00:25:48.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:48.058 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:48.058 "hdgst": false, 00:25:48.058 "ddgst": false 00:25:48.058 }, 00:25:48.058 "method": "bdev_nvme_attach_controller" 00:25:48.058 },{ 00:25:48.058 "params": { 00:25:48.058 "name": "Nvme1", 00:25:48.058 "trtype": "tcp", 00:25:48.058 "traddr": "10.0.0.2", 00:25:48.058 "adrfam": "ipv4", 00:25:48.058 "trsvcid": "4420", 00:25:48.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:48.058 "hdgst": false, 00:25:48.058 "ddgst": false 00:25:48.058 }, 00:25:48.058 "method": "bdev_nvme_attach_controller" 00:25:48.058 }' 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:48.058 10:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.058 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:48.058 ... 00:25:48.058 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:48.058 ... 
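For reference, the trace above assembles the NVMe-oF connection config on /dev/fd/62 and the fio job on /dev/fd/61 before launching fio through the spdk_bdev plugin. A minimal standalone sketch of the same invocation, assuming the Nvme0/Nvme1 controllers from the generated config expose bdevs Nvme0n1/Nvme1n1 and using ordinary files in place of the harness's file descriptors (the exact job gen_fio_conf writes may differ):

# Sketch only: approximates this run's job (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5).
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
numjobs=2
iodepth=8
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
# /tmp/nvme.json stands in for the bdev_nvme_attach_controller config printed above.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme.json /tmp/dif_rand_params.fio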
00:25:48.058 fio-3.35 00:25:48.059 Starting 4 threads 00:25:48.059 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.323 00:25:53.323 filename0: (groupid=0, jobs=1): err= 0: pid=1319744: Mon Jul 15 10:42:41 2024 00:25:53.323 read: IOPS=1874, BW=14.6MiB/s (15.4MB/s)(73.2MiB/5001msec) 00:25:53.323 slat (usec): min=4, max=219, avg=19.66, stdev=12.09 00:25:53.323 clat (usec): min=1122, max=7681, avg=4200.27, stdev=515.69 00:25:53.323 lat (usec): min=1139, max=7695, avg=4219.93, stdev=515.08 00:25:53.323 clat percentiles (usec): 00:25:53.323 | 1.00th=[ 2835], 5.00th=[ 3523], 10.00th=[ 3785], 20.00th=[ 3982], 00:25:53.323 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:25:53.324 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 5080], 00:25:53.324 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 7373], 99.95th=[ 7439], 00:25:53.324 | 99.99th=[ 7701] 00:25:53.324 bw ( KiB/s): min=14701, max=15216, per=24.79%, avg=14970.33, stdev=182.45, samples=9 00:25:53.324 iops : min= 1837, max= 1902, avg=1871.22, stdev=22.92, samples=9 00:25:53.324 lat (msec) : 2=0.36%, 4=21.15%, 10=78.48% 00:25:53.324 cpu : usr=95.40%, sys=4.14%, ctx=9, majf=0, minf=0 00:25:53.324 IO depths : 1=0.3%, 2=15.7%, 4=57.4%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:53.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.324 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.324 issued rwts: total=9374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.324 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:53.324 filename0: (groupid=0, jobs=1): err= 0: pid=1319745: Mon Jul 15 10:42:41 2024 00:25:53.324 read: IOPS=1876, BW=14.7MiB/s (15.4MB/s)(73.3MiB/5002msec) 00:25:53.324 slat (nsec): min=4574, max=83212, avg=21622.73, stdev=11688.34 00:25:53.324 clat (usec): min=708, max=7751, avg=4183.08, stdev=563.49 00:25:53.324 lat (usec): min=725, max=7766, avg=4204.70, stdev=563.27 00:25:53.324 clat percentiles (usec): 00:25:53.324 | 1.00th=[ 2442], 5.00th=[ 3490], 10.00th=[ 3785], 20.00th=[ 3982], 00:25:53.324 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4228], 00:25:53.324 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 5080], 00:25:53.324 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7504], 00:25:53.324 | 99.99th=[ 7767] 00:25:53.324 bw ( KiB/s): min=14480, max=15376, per=24.84%, avg=14997.33, stdev=248.64, samples=9 00:25:53.324 iops : min= 1810, max= 1922, avg=1874.67, stdev=31.08, samples=9 00:25:53.324 lat (usec) : 750=0.01%, 1000=0.03% 00:25:53.324 lat (msec) : 2=0.54%, 4=20.70%, 10=78.71% 00:25:53.324 cpu : usr=95.66%, sys=3.86%, ctx=6, majf=0, minf=0 00:25:53.324 IO depths : 1=0.2%, 2=17.4%, 4=55.6%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:53.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.324 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.324 issued rwts: total=9385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.324 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:53.324 filename1: (groupid=0, jobs=1): err= 0: pid=1319746: Mon Jul 15 10:42:41 2024 00:25:53.324 read: IOPS=1903, BW=14.9MiB/s (15.6MB/s)(74.4MiB/5002msec) 00:25:53.324 slat (nsec): min=5145, max=73328, avg=15968.66, stdev=10309.55 00:25:53.324 clat (usec): min=710, max=7376, avg=4151.33, stdev=493.51 00:25:53.324 lat (usec): min=718, max=7395, avg=4167.30, stdev=493.57 00:25:53.324 clat percentiles (usec): 00:25:53.324 
| 1.00th=[ 2671], 5.00th=[ 3359], 10.00th=[ 3687], 20.00th=[ 3949], 00:25:53.324 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:25:53.324 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4817], 00:25:53.324 | 99.00th=[ 5932], 99.50th=[ 6390], 99.90th=[ 7177], 99.95th=[ 7308], 00:25:53.324 | 99.99th=[ 7373] 00:25:53.324 bw ( KiB/s): min=14784, max=16016, per=25.19%, avg=15210.67, stdev=397.75, samples=9 00:25:53.324 iops : min= 1848, max= 2002, avg=1901.33, stdev=49.72, samples=9 00:25:53.324 lat (usec) : 750=0.02%, 1000=0.03% 00:25:53.324 lat (msec) : 2=0.25%, 4=22.80%, 10=76.90% 00:25:53.324 cpu : usr=95.00%, sys=4.54%, ctx=10, majf=0, minf=2 00:25:53.324 IO depths : 1=0.5%, 2=11.5%, 4=61.0%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:53.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.324 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.324 issued rwts: total=9519,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.324 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:53.324 filename1: (groupid=0, jobs=1): err= 0: pid=1319747: Mon Jul 15 10:42:41 2024 00:25:53.324 read: IOPS=1895, BW=14.8MiB/s (15.5MB/s)(74.0MiB/5001msec) 00:25:53.324 slat (nsec): min=4511, max=67529, avg=22523.29, stdev=9598.28 00:25:53.324 clat (usec): min=838, max=7716, avg=4139.61, stdev=498.71 00:25:53.324 lat (usec): min=872, max=7726, avg=4162.13, stdev=498.52 00:25:53.324 clat percentiles (usec): 00:25:53.324 | 1.00th=[ 2606], 5.00th=[ 3458], 10.00th=[ 3720], 20.00th=[ 3949], 00:25:53.324 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:25:53.324 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4883], 00:25:53.324 | 99.00th=[ 5997], 99.50th=[ 6521], 99.90th=[ 7177], 99.95th=[ 7439], 00:25:53.324 | 99.99th=[ 7701] 00:25:53.324 bw ( KiB/s): min=14797, max=15360, per=25.08%, avg=15144.56, stdev=181.58, samples=9 00:25:53.324 iops : min= 1849, max= 1920, avg=1893.00, stdev=22.85, samples=9 00:25:53.324 lat (usec) : 1000=0.03% 00:25:53.324 lat (msec) : 2=0.47%, 4=23.41%, 10=76.08% 00:25:53.324 cpu : usr=94.14%, sys=4.76%, ctx=208, majf=0, minf=9 00:25:53.324 IO depths : 1=0.3%, 2=18.7%, 4=54.9%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:53.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.324 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.324 issued rwts: total=9477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.324 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:53.324 00:25:53.324 Run status group 0 (all jobs): 00:25:53.324 READ: bw=59.0MiB/s (61.8MB/s), 14.6MiB/s-14.9MiB/s (15.4MB/s-15.6MB/s), io=295MiB (309MB), run=5001-5002msec 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.324 00:25:53.324 real 0m24.172s 00:25:53.324 user 4m34.728s 00:25:53.324 sys 0m6.259s 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:53.324 10:42:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:53.324 ************************************ 00:25:53.324 END TEST fio_dif_rand_params 00:25:53.324 ************************************ 00:25:53.324 10:42:41 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:25:53.324 10:42:41 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:53.324 10:42:41 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:53.324 10:42:41 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:53.324 10:42:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:53.324 ************************************ 00:25:53.324 START TEST fio_dif_digest 00:25:53.324 ************************************ 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:53.324 bdev_null0 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:53.324 [2024-07-15 10:42:41.693913] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.324 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:53.325 { 00:25:53.325 "params": { 00:25:53.325 "name": "Nvme$subsystem", 00:25:53.325 "trtype": "$TEST_TRANSPORT", 00:25:53.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.325 "adrfam": "ipv4", 00:25:53.325 "trsvcid": "$NVMF_PORT", 00:25:53.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.325 "hdgst": ${hdgst:-false}, 00:25:53.325 "ddgst": ${ddgst:-false} 00:25:53.325 
}, 00:25:53.325 "method": "bdev_nvme_attach_controller" 00:25:53.325 } 00:25:53.325 EOF 00:25:53.325 )") 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:53.325 "params": { 00:25:53.325 "name": "Nvme0", 00:25:53.325 "trtype": "tcp", 00:25:53.325 "traddr": "10.0.0.2", 00:25:53.325 "adrfam": "ipv4", 00:25:53.325 "trsvcid": "4420", 00:25:53.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:53.325 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:53.325 "hdgst": true, 00:25:53.325 "ddgst": true 00:25:53.325 }, 00:25:53.325 "method": "bdev_nvme_attach_controller" 00:25:53.325 }' 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:53.325 10:42:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.582 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:53.582 ... 
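For reference, the rpc_cmd calls traced above for this digest case reduce to the following target-side setup; a sketch assuming the default RPC socket (the harness issues the same RPCs through its rpc_cmd wrapper):

# Sketch only: null bdev with 16-byte metadata and DIF type 3, exported over NVMe/TCP.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The connection config printed above then sets "hdgst": true and "ddgst": true on bdev_nvme_attach_controller, which is what distinguishes fio_dif_digest from the fio_dif_rand_params run.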
00:25:53.582 fio-3.35 00:25:53.582 Starting 3 threads 00:25:53.582 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.772 00:26:05.772 filename0: (groupid=0, jobs=1): err= 0: pid=1320502: Mon Jul 15 10:42:52 2024 00:26:05.772 read: IOPS=223, BW=28.0MiB/s (29.3MB/s)(281MiB/10047msec) 00:26:05.772 slat (nsec): min=4480, max=64088, avg=18016.72, stdev=7221.00 00:26:05.772 clat (usec): min=10187, max=54996, avg=13371.72, stdev=2976.71 00:26:05.772 lat (usec): min=10202, max=55014, avg=13389.74, stdev=2976.62 00:26:05.772 clat percentiles (usec): 00:26:05.772 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11863], 20.00th=[12256], 00:26:05.772 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:26:05.772 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14484], 95.00th=[14877], 00:26:05.772 | 99.00th=[15664], 99.50th=[20579], 99.90th=[54264], 99.95th=[54264], 00:26:05.772 | 99.99th=[54789] 00:26:05.772 bw ( KiB/s): min=27136, max=29440, per=39.30%, avg=28736.00, stdev=704.15, samples=20 00:26:05.772 iops : min= 212, max= 230, avg=224.50, stdev= 5.50, samples=20 00:26:05.772 lat (msec) : 20=99.42%, 50=0.13%, 100=0.45% 00:26:05.772 cpu : usr=93.67%, sys=5.29%, ctx=169, majf=0, minf=200 00:26:05.772 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.772 issued rwts: total=2247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.772 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:05.772 filename0: (groupid=0, jobs=1): err= 0: pid=1320503: Mon Jul 15 10:42:52 2024 00:26:05.772 read: IOPS=176, BW=22.0MiB/s (23.1MB/s)(222MiB/10047msec) 00:26:05.772 slat (nsec): min=4372, max=57734, avg=16191.33, stdev=4071.75 00:26:05.772 clat (usec): min=7804, max=51757, avg=16966.23, stdev=2008.04 00:26:05.772 lat (usec): min=7824, max=51776, avg=16982.42, stdev=2007.91 00:26:05.772 clat percentiles (usec): 00:26:05.772 | 1.00th=[ 9765], 5.00th=[14746], 10.00th=[15401], 20.00th=[15926], 00:26:05.772 | 30.00th=[16319], 40.00th=[16712], 50.00th=[16909], 60.00th=[17171], 00:26:05.772 | 70.00th=[17695], 80.00th=[18220], 90.00th=[18744], 95.00th=[19268], 00:26:05.772 | 99.00th=[20579], 99.50th=[21103], 99.90th=[48497], 99.95th=[51643], 00:26:05.772 | 99.99th=[51643] 00:26:05.772 bw ( KiB/s): min=21504, max=24576, per=30.97%, avg=22645.35, stdev=850.03, samples=20 00:26:05.772 iops : min= 168, max= 192, avg=176.90, stdev= 6.66, samples=20 00:26:05.772 lat (msec) : 10=1.30%, 20=96.67%, 50=1.98%, 100=0.06% 00:26:05.772 cpu : usr=95.08%, sys=4.47%, ctx=27, majf=0, minf=232 00:26:05.772 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.772 issued rwts: total=1772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.772 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:05.772 filename0: (groupid=0, jobs=1): err= 0: pid=1320504: Mon Jul 15 10:42:52 2024 00:26:05.772 read: IOPS=171, BW=21.4MiB/s (22.5MB/s)(215MiB/10047msec) 00:26:05.772 slat (nsec): min=4566, max=41016, avg=18094.28, stdev=3511.66 00:26:05.773 clat (usec): min=8984, max=54290, avg=17467.29, stdev=1945.81 00:26:05.773 lat (usec): min=9025, max=54309, avg=17485.39, stdev=1945.77 00:26:05.773 clat percentiles (usec): 00:26:05.773 | 
1.00th=[10683], 5.00th=[15401], 10.00th=[15926], 20.00th=[16450], 00:26:05.773 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:26:05.773 | 70.00th=[18220], 80.00th=[18482], 90.00th=[19006], 95.00th=[19530], 00:26:05.773 | 99.00th=[21103], 99.50th=[21890], 99.90th=[49021], 99.95th=[54264], 00:26:05.773 | 99.99th=[54264] 00:26:05.773 bw ( KiB/s): min=20992, max=23296, per=30.09%, avg=22003.20, stdev=584.21, samples=20 00:26:05.773 iops : min= 164, max= 182, avg=171.90, stdev= 4.56, samples=20 00:26:05.773 lat (msec) : 10=0.17%, 20=96.40%, 50=3.37%, 100=0.06% 00:26:05.773 cpu : usr=95.28%, sys=4.26%, ctx=32, majf=0, minf=173 00:26:05.773 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.773 issued rwts: total=1721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.773 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:05.773 00:26:05.773 Run status group 0 (all jobs): 00:26:05.773 READ: bw=71.4MiB/s (74.9MB/s), 21.4MiB/s-28.0MiB/s (22.5MB/s-29.3MB/s), io=718MiB (752MB), run=10047-10047msec 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.773 00:26:05.773 real 0m11.171s 00:26:05.773 user 0m29.656s 00:26:05.773 sys 0m1.700s 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:05.773 10:42:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:05.773 ************************************ 00:26:05.773 END TEST fio_dif_digest 00:26:05.773 ************************************ 00:26:05.773 10:42:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:05.773 10:42:52 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:05.773 10:42:52 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:05.773 10:42:52 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:05.773 10:42:52 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:05.773 10:42:52 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:05.773 10:42:52 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:05.773 10:42:52 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:05.773 10:42:52 nvmf_dif -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:26:05.773 rmmod nvme_tcp 00:26:05.773 rmmod nvme_fabrics 00:26:05.773 rmmod nvme_keyring 00:26:05.773 10:42:52 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:05.773 10:42:52 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:05.773 10:42:52 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:05.773 10:42:52 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1313824 ']' 00:26:05.773 10:42:52 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1313824 00:26:05.773 10:42:52 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1313824 ']' 00:26:05.773 10:42:52 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1313824 00:26:05.773 10:42:52 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:05.773 10:42:52 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:05.773 10:42:52 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1313824 00:26:05.773 10:42:52 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:05.773 10:42:52 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:05.773 10:42:52 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1313824' 00:26:05.773 killing process with pid 1313824 00:26:05.773 10:42:52 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1313824 00:26:05.773 10:42:52 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1313824 00:26:05.773 10:42:53 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:05.773 10:42:53 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:05.773 Waiting for block devices as requested 00:26:06.032 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:06.032 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:06.032 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:06.291 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:06.291 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:06.291 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:06.291 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:06.550 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:06.550 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:26:06.550 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:06.810 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:06.810 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:06.810 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:06.810 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:07.069 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:07.069 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:07.069 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:07.331 10:42:55 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:07.331 10:42:55 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:07.331 10:42:55 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:07.331 10:42:55 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:07.331 10:42:55 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.331 10:42:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:07.331 10:42:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.288 10:42:57 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:09.288 00:26:09.288 real 1m6.998s 00:26:09.288 user 6m30.896s 00:26:09.288 sys 0m17.880s 00:26:09.288 10:42:57 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:26:09.288 10:42:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:09.288 ************************************ 00:26:09.288 END TEST nvmf_dif 00:26:09.288 ************************************ 00:26:09.288 10:42:57 -- common/autotest_common.sh@1142 -- # return 0 00:26:09.288 10:42:57 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:09.288 10:42:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:09.288 10:42:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.288 10:42:57 -- common/autotest_common.sh@10 -- # set +x 00:26:09.288 ************************************ 00:26:09.288 START TEST nvmf_abort_qd_sizes 00:26:09.288 ************************************ 00:26:09.288 10:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:09.547 * Looking for test storage... 00:26:09.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:09.547 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.548 10:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:09.548 10:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.548 10:42:57 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:09.548 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:09.548 10:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:09.548 10:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:11.458 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:11.458 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:11.459 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:11.459 Found net devices under 0000:09:00.0: cvl_0_0 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:11.459 Found net devices under 0000:09:00.1: cvl_0_1 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
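With both E810 ports discovered (cvl_0_0 and cvl_0_1 above), the nvmf_tcp_init trace that follows splits them into a target network namespace and a host-side initiator before any NVMe/TCP traffic runs. Condensed into plain iproute2 commands, the topology it builds is the following; this only restates the steps traced below, with the interface names this CI host reports:

# Sketch only: target port isolated in its own namespace, initiator port left in the
# default namespace, one /24 shared between them, connectivity checked both ways.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1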
00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:11.459 10:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:11.721 10:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:11.721 10:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:11.721 10:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:11.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:11.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:26:11.721 00:26:11.721 --- 10.0.0.2 ping statistics --- 00:26:11.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.721 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:26:11.721 10:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:11.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:11.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:26:11.721 00:26:11.721 --- 10.0.0.1 ping statistics --- 00:26:11.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.721 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:26:11.721 10:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.721 10:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:11.721 10:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:11.721 10:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:12.654 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:12.654 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:12.654 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:12.913 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:12.913 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:12.913 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:12.913 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:12.913 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:12.913 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:12.913 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:12.913 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:12.913 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:12.913 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:12.913 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:12.913 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:12.913 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:13.847 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1325415 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1325415 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1325415 ']' 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:14.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:14.105 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:14.105 [2024-07-15 10:43:02.480265] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:14.105 [2024-07-15 10:43:02.480338] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.105 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.105 [2024-07-15 10:43:02.542416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:14.105 [2024-07-15 10:43:02.648529] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:14.105 [2024-07-15 10:43:02.648606] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:14.105 [2024-07-15 10:43:02.648619] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:14.105 [2024-07-15 10:43:02.648630] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:14.105 [2024-07-15 10:43:02.648639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:14.105 [2024-07-15 10:43:02.648756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.105 [2024-07-15 10:43:02.648826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:14.105 [2024-07-15 10:43:02.648890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:14.105 [2024-07-15 10:43:02.648893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.361 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:14.361 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:26:14.361 10:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:14.361 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:14.361 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:14.361 10:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.361 10:43:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:14.361 10:43:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:14.361 10:43:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:14.361 10:43:02 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:14.361 10:43:02 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:14.361 10:43:02 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:26:14.362 10:43:02 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:14.362 10:43:02 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:14.362 10:43:02 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:26:14.362 10:43:02 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:14.362 10:43:02 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:14.362 10:43:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:14.362 10:43:02 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:14.362 10:43:02 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:26:14.362 10:43:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:14.362 10:43:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:26:14.362 10:43:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:14.362 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:14.362 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.362 10:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:14.362 ************************************ 00:26:14.362 START TEST spdk_target_abort 00:26:14.362 ************************************ 00:26:14.362 10:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:26:14.362 10:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:14.362 10:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:26:14.362 10:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.362 10:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:17.634 spdk_targetn1 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:17.634 [2024-07-15 10:43:05.651586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:17.634 [2024-07-15 10:43:05.683808] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:17.634 10:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:17.634 EAL: No free 2048 kB hugepages 
reported on node 1 00:26:20.902 Initializing NVMe Controllers 00:26:20.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:20.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:20.902 Initialization complete. Launching workers. 00:26:20.902 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13072, failed: 0 00:26:20.902 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1200, failed to submit 11872 00:26:20.902 success 739, unsuccess 461, failed 0 00:26:20.902 10:43:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:20.902 10:43:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:20.902 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.183 Initializing NVMe Controllers 00:26:24.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:24.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:24.183 Initialization complete. Launching workers. 00:26:24.183 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8691, failed: 0 00:26:24.183 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1272, failed to submit 7419 00:26:24.183 success 308, unsuccess 964, failed 0 00:26:24.183 10:43:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:24.183 10:43:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:24.183 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.460 Initializing NVMe Controllers 00:26:27.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:27.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:27.460 Initialization complete. Launching workers. 
00:26:27.460 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31851, failed: 0 00:26:27.460 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2801, failed to submit 29050 00:26:27.460 success 541, unsuccess 2260, failed 0 00:26:27.460 10:43:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:27.460 10:43:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.460 10:43:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:27.460 10:43:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.460 10:43:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:27.460 10:43:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.460 10:43:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:28.394 10:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.394 10:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1325415 00:26:28.394 10:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1325415 ']' 00:26:28.394 10:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1325415 00:26:28.394 10:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:26:28.394 10:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:28.395 10:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1325415 00:26:28.395 10:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:28.395 10:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:28.395 10:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1325415' 00:26:28.395 killing process with pid 1325415 00:26:28.395 10:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1325415 00:26:28.395 10:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1325415 00:26:28.395 00:26:28.395 real 0m14.121s 00:26:28.395 user 0m53.571s 00:26:28.395 sys 0m2.464s 00:26:28.395 10:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:28.395 10:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:28.395 ************************************ 00:26:28.395 END TEST spdk_target_abort 00:26:28.395 ************************************ 00:26:28.653 10:43:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:26:28.653 10:43:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:28.653 10:43:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:28.653 10:43:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.653 10:43:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:28.653 
************************************ 00:26:28.653 START TEST kernel_target_abort 00:26:28.653 ************************************ 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:28.653 10:43:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:28.653 10:43:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:28.653 10:43:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:29.587 Waiting for block devices as requested 00:26:29.587 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:29.845 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:29.845 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:29.845 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:30.103 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:30.103 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:30.103 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:30.361 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:30.361 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:26:30.361 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:30.619 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:30.619 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:30.619 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:30.619 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:30.877 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:30.877 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:30.877 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:31.135 No valid GPT data, bailing 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:31.135 10:43:19 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:26:31.135 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:26:31.136 00:26:31.136 Discovery Log Number of Records 2, Generation counter 2 00:26:31.136 =====Discovery Log Entry 0====== 00:26:31.136 trtype: tcp 00:26:31.136 adrfam: ipv4 00:26:31.136 subtype: current discovery subsystem 00:26:31.136 treq: not specified, sq flow control disable supported 00:26:31.136 portid: 1 00:26:31.136 trsvcid: 4420 00:26:31.136 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:31.136 traddr: 10.0.0.1 00:26:31.136 eflags: none 00:26:31.136 sectype: none 00:26:31.136 =====Discovery Log Entry 1====== 00:26:31.136 trtype: tcp 00:26:31.136 adrfam: ipv4 00:26:31.136 subtype: nvme subsystem 00:26:31.136 treq: not specified, sq flow control disable supported 00:26:31.136 portid: 1 00:26:31.136 trsvcid: 4420 00:26:31.136 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:31.136 traddr: 10.0.0.1 00:26:31.136 eflags: none 00:26:31.136 sectype: none 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:31.136 10:43:19 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:31.136 10:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:31.136 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.438 Initializing NVMe Controllers 00:26:34.438 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:34.438 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:34.438 Initialization complete. Launching workers. 00:26:34.438 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55209, failed: 0 00:26:34.438 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 55209, failed to submit 0 00:26:34.438 success 0, unsuccess 55209, failed 0 00:26:34.438 10:43:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:34.438 10:43:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:34.438 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.714 Initializing NVMe Controllers 00:26:37.714 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:37.714 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:37.714 Initialization complete. Launching workers. 
00:26:37.714 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102721, failed: 0 00:26:37.714 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25914, failed to submit 76807 00:26:37.714 success 0, unsuccess 25914, failed 0 00:26:37.714 10:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:37.714 10:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:37.714 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.051 Initializing NVMe Controllers 00:26:41.051 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:41.051 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:41.051 Initialization complete. Launching workers. 00:26:41.051 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98526, failed: 0 00:26:41.051 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24618, failed to submit 73908 00:26:41.051 success 0, unsuccess 24618, failed 0 00:26:41.051 10:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:26:41.051 10:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:41.051 10:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:26:41.051 10:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:41.051 10:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:41.051 10:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:41.051 10:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:41.051 10:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:41.051 10:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:41.051 10:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:41.982 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:41.982 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:41.982 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:41.982 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:41.982 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:41.982 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:41.982 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:41.982 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:41.982 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:41.982 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:41.982 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:41.982 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:41.982 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:41.982 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:26:41.982 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:41.982 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:42.916 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:26:43.173 00:26:43.173 real 0m14.516s 00:26:43.173 user 0m6.657s 00:26:43.173 sys 0m3.225s 00:26:43.173 10:43:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:43.173 10:43:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:43.173 ************************************ 00:26:43.173 END TEST kernel_target_abort 00:26:43.173 ************************************ 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:43.173 rmmod nvme_tcp 00:26:43.173 rmmod nvme_fabrics 00:26:43.173 rmmod nvme_keyring 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1325415 ']' 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1325415 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1325415 ']' 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1325415 00:26:43.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1325415) - No such process 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1325415 is not found' 00:26:43.173 Process with pid 1325415 is not found 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:43.173 10:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:44.106 Waiting for block devices as requested 00:26:44.363 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:44.363 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:44.363 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:44.622 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:44.622 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:44.622 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:44.879 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:44.879 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:44.879 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:26:45.134 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:45.134 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:45.134 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:45.391 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:45.391 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:26:45.391 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:45.391 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:45.648 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:45.648 10:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:45.648 10:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:45.648 10:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.648 10:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:45.648 10:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.648 10:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:45.648 10:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.175 10:43:36 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:48.175 00:26:48.175 real 0m38.340s 00:26:48.175 user 1m2.479s 00:26:48.175 sys 0m9.077s 00:26:48.175 10:43:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:48.175 10:43:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:48.175 ************************************ 00:26:48.175 END TEST nvmf_abort_qd_sizes 00:26:48.175 ************************************ 00:26:48.175 10:43:36 -- common/autotest_common.sh@1142 -- # return 0 00:26:48.175 10:43:36 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:48.175 10:43:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:48.175 10:43:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:48.175 10:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:48.175 ************************************ 00:26:48.175 START TEST keyring_file 00:26:48.175 ************************************ 00:26:48.175 10:43:36 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:48.175 * Looking for test storage... 
00:26:48.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:26:48.175 10:43:36 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:26:48.175 10:43:36 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.175 10:43:36 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.175 10:43:36 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.175 10:43:36 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.175 10:43:36 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.175 10:43:36 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.175 10:43:36 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.175 10:43:36 keyring_file -- paths/export.sh@5 -- # export PATH 00:26:48.175 10:43:36 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@47 -- # : 0 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:48.175 10:43:36 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:48.175 10:43:36 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:48.175 10:43:36 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:48.175 10:43:36 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:26:48.175 10:43:36 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:26:48.175 10:43:36 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:26:48.175 10:43:36 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:48.175 10:43:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:48.175 10:43:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:48.175 10:43:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:48.175 10:43:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:48.175 10:43:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:48.175 10:43:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NIOOFxgfNJ 00:26:48.175 10:43:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:48.175 10:43:36 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:48.175 10:43:36 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NIOOFxgfNJ 00:26:48.175 10:43:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NIOOFxgfNJ 00:26:48.175 10:43:36 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.NIOOFxgfNJ 00:26:48.175 10:43:36 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:26:48.176 10:43:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:48.176 10:43:36 keyring_file -- keyring/common.sh@17 -- # name=key1 00:26:48.176 10:43:36 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:48.176 10:43:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:48.176 10:43:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:48.176 10:43:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1SjQddPOk8 00:26:48.176 10:43:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:48.176 10:43:36 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:48.176 10:43:36 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:48.176 10:43:36 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:48.176 10:43:36 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:26:48.176 10:43:36 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:48.176 10:43:36 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:48.176 10:43:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1SjQddPOk8 00:26:48.176 10:43:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1SjQddPOk8 00:26:48.176 10:43:36 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.1SjQddPOk8 00:26:48.176 10:43:36 keyring_file -- keyring/file.sh@30 -- # tgtpid=1331189 00:26:48.176 10:43:36 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:26:48.176 10:43:36 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1331189 00:26:48.176 10:43:36 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1331189 ']' 00:26:48.176 10:43:36 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.176 10:43:36 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:48.176 10:43:36 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.176 10:43:36 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:48.176 10:43:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:48.176 [2024-07-15 10:43:36.380727] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:26:48.176 [2024-07-15 10:43:36.380851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331189 ] 00:26:48.176 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.176 [2024-07-15 10:43:36.442015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.176 [2024-07-15 10:43:36.547129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.434 10:43:36 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:48.434 10:43:36 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:26:48.434 10:43:36 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:26:48.434 10:43:36 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.434 10:43:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:48.434 [2024-07-15 10:43:36.764416] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.434 null0 00:26:48.434 [2024-07-15 10:43:36.796466] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:48.434 [2024-07-15 10:43:36.796924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:48.434 [2024-07-15 10:43:36.804477] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.435 10:43:36 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:48.435 [2024-07-15 10:43:36.812506] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:26:48.435 request: 00:26:48.435 { 00:26:48.435 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:26:48.435 "secure_channel": false, 00:26:48.435 "listen_address": { 00:26:48.435 "trtype": "tcp", 00:26:48.435 "traddr": "127.0.0.1", 00:26:48.435 "trsvcid": "4420" 00:26:48.435 }, 00:26:48.435 "method": "nvmf_subsystem_add_listener", 00:26:48.435 "req_id": 1 00:26:48.435 } 00:26:48.435 Got JSON-RPC error response 00:26:48.435 response: 00:26:48.435 { 00:26:48.435 "code": -32602, 00:26:48.435 "message": "Invalid parameters" 00:26:48.435 } 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:48.435 10:43:36 keyring_file -- keyring/file.sh@46 -- # bperfpid=1331194 00:26:48.435 10:43:36 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:26:48.435 10:43:36 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1331194 /var/tmp/bperf.sock 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1331194 ']' 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:48.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:48.435 10:43:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:48.435 [2024-07-15 10:43:36.857259] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:48.435 [2024-07-15 10:43:36.857328] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331194 ] 00:26:48.435 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.435 [2024-07-15 10:43:36.911978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.693 [2024-07-15 10:43:37.017362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.693 10:43:37 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:48.693 10:43:37 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:26:48.693 10:43:37 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIOOFxgfNJ 00:26:48.693 10:43:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NIOOFxgfNJ 00:26:48.951 10:43:37 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.1SjQddPOk8 00:26:48.951 10:43:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.1SjQddPOk8 00:26:49.208 10:43:37 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:26:49.208 10:43:37 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:26:49.208 10:43:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:49.208 10:43:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:49.208 10:43:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:49.465 10:43:37 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.NIOOFxgfNJ == \/\t\m\p\/\t\m\p\.\N\I\O\O\F\x\g\f\N\J ]] 00:26:49.465 10:43:37 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:26:49.465 10:43:37 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:26:49.465 10:43:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:49.465 10:43:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:49.465 10:43:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:49.722 10:43:38 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.1SjQddPOk8 == \/\t\m\p\/\t\m\p\.\1\S\j\Q\d\d\P\O\k\8 ]] 00:26:49.722 10:43:38 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:26:49.722 10:43:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:49.722 10:43:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:49.722 10:43:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:49.722 10:43:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:49.722 10:43:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:49.980 10:43:38 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:26:49.980 10:43:38 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:26:49.980 10:43:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:49.980 10:43:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:49.980 10:43:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:49.980 10:43:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:49.980 10:43:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:50.238 10:43:38 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:26:50.238 10:43:38 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:50.238 10:43:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:50.495 [2024-07-15 10:43:38.845424] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:50.495 nvme0n1 00:26:50.495 10:43:38 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:26:50.495 10:43:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:50.495 10:43:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:50.495 10:43:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:50.495 10:43:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:50.495 10:43:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:50.753 10:43:39 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:26:50.753 10:43:39 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:26:50.753 10:43:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:50.753 10:43:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:50.753 10:43:39 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:50.753 10:43:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:50.753 10:43:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:51.010 10:43:39 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:26:51.010 10:43:39 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:51.010 Running I/O for 1 seconds... 00:26:52.382 00:26:52.382 Latency(us) 00:26:52.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.382 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:26:52.382 nvme0n1 : 1.01 9948.20 38.86 0.00 0.00 12811.39 7427.41 24175.50 00:26:52.382 =================================================================================================================== 00:26:52.382 Total : 9948.20 38.86 0.00 0.00 12811.39 7427.41 24175.50 00:26:52.382 0 00:26:52.382 10:43:40 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:52.382 10:43:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:52.382 10:43:40 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:26:52.382 10:43:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:52.382 10:43:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:52.382 10:43:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:52.382 10:43:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:52.382 10:43:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:52.640 10:43:41 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:26:52.640 10:43:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:26:52.640 10:43:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:52.640 10:43:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:52.640 10:43:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:52.640 10:43:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:52.640 10:43:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:52.898 10:43:41 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:26:52.898 10:43:41 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:52.898 10:43:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:26:52.898 10:43:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:52.898 10:43:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:26:52.898 10:43:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:52.898 10:43:41 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:26:52.898 10:43:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:52.898 10:43:41 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:52.898 10:43:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:53.156 [2024-07-15 10:43:41.523630] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:53.156 [2024-07-15 10:43:41.523894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141a9a0 (107): Transport endpoint is not connected 00:26:53.156 [2024-07-15 10:43:41.524886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141a9a0 (9): Bad file descriptor 00:26:53.156 [2024-07-15 10:43:41.525886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:53.156 [2024-07-15 10:43:41.525906] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:53.156 [2024-07-15 10:43:41.525920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:53.156 request: 00:26:53.156 { 00:26:53.156 "name": "nvme0", 00:26:53.156 "trtype": "tcp", 00:26:53.156 "traddr": "127.0.0.1", 00:26:53.156 "adrfam": "ipv4", 00:26:53.156 "trsvcid": "4420", 00:26:53.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:53.156 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:53.156 "prchk_reftag": false, 00:26:53.156 "prchk_guard": false, 00:26:53.156 "hdgst": false, 00:26:53.156 "ddgst": false, 00:26:53.156 "psk": "key1", 00:26:53.156 "method": "bdev_nvme_attach_controller", 00:26:53.156 "req_id": 1 00:26:53.156 } 00:26:53.156 Got JSON-RPC error response 00:26:53.156 response: 00:26:53.156 { 00:26:53.156 "code": -5, 00:26:53.156 "message": "Input/output error" 00:26:53.156 } 00:26:53.156 10:43:41 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:26:53.156 10:43:41 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:53.156 10:43:41 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:53.156 10:43:41 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:53.156 10:43:41 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:26:53.156 10:43:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:53.156 10:43:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:53.156 10:43:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:53.156 10:43:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:53.156 10:43:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:53.414 10:43:41 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:26:53.414 10:43:41 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:26:53.414 10:43:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:53.414 10:43:41 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:53.414 10:43:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:53.414 10:43:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:53.414 10:43:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:53.671 10:43:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:26:53.671 10:43:42 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:26:53.671 10:43:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:53.928 10:43:42 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:26:53.928 10:43:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:26:54.185 10:43:42 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:26:54.185 10:43:42 keyring_file -- keyring/file.sh@77 -- # jq length 00:26:54.185 10:43:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:54.443 10:43:42 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:26:54.443 10:43:42 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.NIOOFxgfNJ 00:26:54.443 10:43:42 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIOOFxgfNJ 00:26:54.443 10:43:42 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:26:54.443 10:43:42 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIOOFxgfNJ 00:26:54.443 10:43:42 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:26:54.443 10:43:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:54.443 10:43:42 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:26:54.443 10:43:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:54.443 10:43:42 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIOOFxgfNJ 00:26:54.443 10:43:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NIOOFxgfNJ 00:26:54.700 [2024-07-15 10:43:43.023943] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NIOOFxgfNJ': 0100660 00:26:54.700 [2024-07-15 10:43:43.023974] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:54.700 request: 00:26:54.700 { 00:26:54.700 "name": "key0", 00:26:54.700 "path": "/tmp/tmp.NIOOFxgfNJ", 00:26:54.700 "method": "keyring_file_add_key", 00:26:54.700 "req_id": 1 00:26:54.700 } 00:26:54.700 Got JSON-RPC error response 00:26:54.700 response: 00:26:54.700 { 00:26:54.700 "code": -1, 00:26:54.700 "message": "Operation not permitted" 00:26:54.700 } 00:26:54.700 10:43:43 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:26:54.700 10:43:43 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:54.700 10:43:43 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:54.700 10:43:43 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:54.700 10:43:43 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.NIOOFxgfNJ 00:26:54.700 10:43:43 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIOOFxgfNJ 00:26:54.700 10:43:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NIOOFxgfNJ 00:26:54.956 10:43:43 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.NIOOFxgfNJ 00:26:54.956 10:43:43 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:26:54.956 10:43:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:54.956 10:43:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:54.956 10:43:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:54.956 10:43:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:54.956 10:43:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:55.212 10:43:43 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:26:55.212 10:43:43 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:55.212 10:43:43 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:26:55.212 10:43:43 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:55.213 10:43:43 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:26:55.213 10:43:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:55.213 10:43:43 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:26:55.213 10:43:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:55.213 10:43:43 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:55.213 10:43:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:55.213 [2024-07-15 10:43:43.757940] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.NIOOFxgfNJ': No such file or directory 00:26:55.213 [2024-07-15 10:43:43.757970] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:26:55.213 [2024-07-15 10:43:43.758004] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:26:55.213 [2024-07-15 10:43:43.758015] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:55.213 [2024-07-15 10:43:43.758025] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:26:55.213 request: 00:26:55.213 { 00:26:55.213 "name": "nvme0", 00:26:55.213 "trtype": "tcp", 00:26:55.213 "traddr": "127.0.0.1", 00:26:55.213 "adrfam": "ipv4", 00:26:55.213 
"trsvcid": "4420", 00:26:55.213 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:55.213 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:55.213 "prchk_reftag": false, 00:26:55.213 "prchk_guard": false, 00:26:55.213 "hdgst": false, 00:26:55.213 "ddgst": false, 00:26:55.213 "psk": "key0", 00:26:55.213 "method": "bdev_nvme_attach_controller", 00:26:55.213 "req_id": 1 00:26:55.213 } 00:26:55.213 Got JSON-RPC error response 00:26:55.213 response: 00:26:55.213 { 00:26:55.213 "code": -19, 00:26:55.213 "message": "No such device" 00:26:55.213 } 00:26:55.470 10:43:43 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:26:55.470 10:43:43 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:55.470 10:43:43 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:55.470 10:43:43 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:55.470 10:43:43 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:26:55.470 10:43:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:55.728 10:43:44 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:55.728 10:43:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:55.728 10:43:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:55.728 10:43:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:55.728 10:43:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:55.728 10:43:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:55.728 10:43:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dEe5cb1k2v 00:26:55.728 10:43:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:55.728 10:43:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:55.728 10:43:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:55.728 10:43:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:55.728 10:43:44 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:26:55.728 10:43:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:55.728 10:43:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:55.728 10:43:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dEe5cb1k2v 00:26:55.728 10:43:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dEe5cb1k2v 00:26:55.728 10:43:44 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.dEe5cb1k2v 00:26:55.728 10:43:44 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dEe5cb1k2v 00:26:55.728 10:43:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dEe5cb1k2v 00:26:55.985 10:43:44 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:55.985 10:43:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:56.243 nvme0n1 00:26:56.243 
10:43:44 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:26:56.243 10:43:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:56.243 10:43:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:56.243 10:43:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:56.243 10:43:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:56.243 10:43:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:56.500 10:43:44 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:26:56.500 10:43:44 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:26:56.500 10:43:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:56.758 10:43:45 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:26:56.758 10:43:45 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:26:56.758 10:43:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:56.758 10:43:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:56.758 10:43:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:57.016 10:43:45 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:26:57.016 10:43:45 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:26:57.016 10:43:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:57.016 10:43:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:57.016 10:43:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:57.016 10:43:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:57.016 10:43:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:57.273 10:43:45 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:26:57.273 10:43:45 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:57.273 10:43:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:57.530 10:43:45 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:26:57.530 10:43:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:57.530 10:43:45 keyring_file -- keyring/file.sh@104 -- # jq length 00:26:57.787 10:43:46 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:26:57.787 10:43:46 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dEe5cb1k2v 00:26:57.787 10:43:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dEe5cb1k2v 00:26:58.043 10:43:46 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.1SjQddPOk8 00:26:58.043 10:43:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.1SjQddPOk8 00:26:58.301 10:43:46 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:58.301 10:43:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:58.557 nvme0n1 00:26:58.557 10:43:46 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:26:58.557 10:43:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:26:58.814 10:43:47 keyring_file -- keyring/file.sh@112 -- # config='{ 00:26:58.814 "subsystems": [ 00:26:58.814 { 00:26:58.814 "subsystem": "keyring", 00:26:58.814 "config": [ 00:26:58.814 { 00:26:58.814 "method": "keyring_file_add_key", 00:26:58.814 "params": { 00:26:58.814 "name": "key0", 00:26:58.814 "path": "/tmp/tmp.dEe5cb1k2v" 00:26:58.814 } 00:26:58.814 }, 00:26:58.814 { 00:26:58.814 "method": "keyring_file_add_key", 00:26:58.814 "params": { 00:26:58.814 "name": "key1", 00:26:58.814 "path": "/tmp/tmp.1SjQddPOk8" 00:26:58.814 } 00:26:58.814 } 00:26:58.814 ] 00:26:58.814 }, 00:26:58.814 { 00:26:58.814 "subsystem": "iobuf", 00:26:58.814 "config": [ 00:26:58.814 { 00:26:58.814 "method": "iobuf_set_options", 00:26:58.814 "params": { 00:26:58.814 "small_pool_count": 8192, 00:26:58.814 "large_pool_count": 1024, 00:26:58.814 "small_bufsize": 8192, 00:26:58.814 "large_bufsize": 135168 00:26:58.814 } 00:26:58.814 } 00:26:58.814 ] 00:26:58.814 }, 00:26:58.814 { 00:26:58.814 "subsystem": "sock", 00:26:58.814 "config": [ 00:26:58.814 { 00:26:58.814 "method": "sock_set_default_impl", 00:26:58.814 "params": { 00:26:58.814 "impl_name": "posix" 00:26:58.814 } 00:26:58.814 }, 00:26:58.814 { 00:26:58.814 "method": "sock_impl_set_options", 00:26:58.814 "params": { 00:26:58.814 "impl_name": "ssl", 00:26:58.814 "recv_buf_size": 4096, 00:26:58.814 "send_buf_size": 4096, 00:26:58.814 "enable_recv_pipe": true, 00:26:58.814 "enable_quickack": false, 00:26:58.814 "enable_placement_id": 0, 00:26:58.814 "enable_zerocopy_send_server": true, 00:26:58.814 "enable_zerocopy_send_client": false, 00:26:58.814 "zerocopy_threshold": 0, 00:26:58.814 "tls_version": 0, 00:26:58.814 "enable_ktls": false 00:26:58.814 } 00:26:58.814 }, 00:26:58.814 { 00:26:58.814 "method": "sock_impl_set_options", 00:26:58.814 "params": { 00:26:58.814 "impl_name": "posix", 00:26:58.814 "recv_buf_size": 2097152, 00:26:58.814 "send_buf_size": 2097152, 00:26:58.814 "enable_recv_pipe": true, 00:26:58.814 "enable_quickack": false, 00:26:58.814 "enable_placement_id": 0, 00:26:58.814 "enable_zerocopy_send_server": true, 00:26:58.814 "enable_zerocopy_send_client": false, 00:26:58.814 "zerocopy_threshold": 0, 00:26:58.814 "tls_version": 0, 00:26:58.814 "enable_ktls": false 00:26:58.815 } 00:26:58.815 } 00:26:58.815 ] 00:26:58.815 }, 00:26:58.815 { 00:26:58.815 "subsystem": "vmd", 00:26:58.815 "config": [] 00:26:58.815 }, 00:26:58.815 { 00:26:58.815 "subsystem": "accel", 00:26:58.815 "config": [ 00:26:58.815 { 00:26:58.815 "method": "accel_set_options", 00:26:58.815 "params": { 00:26:58.815 "small_cache_size": 128, 00:26:58.815 "large_cache_size": 16, 00:26:58.815 "task_count": 2048, 00:26:58.815 "sequence_count": 2048, 00:26:58.815 "buf_count": 2048 00:26:58.815 } 00:26:58.815 } 00:26:58.815 ] 00:26:58.815 
}, 00:26:58.815 { 00:26:58.815 "subsystem": "bdev", 00:26:58.815 "config": [ 00:26:58.815 { 00:26:58.815 "method": "bdev_set_options", 00:26:58.815 "params": { 00:26:58.815 "bdev_io_pool_size": 65535, 00:26:58.815 "bdev_io_cache_size": 256, 00:26:58.815 "bdev_auto_examine": true, 00:26:58.815 "iobuf_small_cache_size": 128, 00:26:58.815 "iobuf_large_cache_size": 16 00:26:58.815 } 00:26:58.815 }, 00:26:58.815 { 00:26:58.815 "method": "bdev_raid_set_options", 00:26:58.815 "params": { 00:26:58.815 "process_window_size_kb": 1024 00:26:58.815 } 00:26:58.815 }, 00:26:58.815 { 00:26:58.815 "method": "bdev_iscsi_set_options", 00:26:58.815 "params": { 00:26:58.815 "timeout_sec": 30 00:26:58.815 } 00:26:58.815 }, 00:26:58.815 { 00:26:58.815 "method": "bdev_nvme_set_options", 00:26:58.815 "params": { 00:26:58.815 "action_on_timeout": "none", 00:26:58.815 "timeout_us": 0, 00:26:58.815 "timeout_admin_us": 0, 00:26:58.815 "keep_alive_timeout_ms": 10000, 00:26:58.815 "arbitration_burst": 0, 00:26:58.815 "low_priority_weight": 0, 00:26:58.815 "medium_priority_weight": 0, 00:26:58.815 "high_priority_weight": 0, 00:26:58.815 "nvme_adminq_poll_period_us": 10000, 00:26:58.815 "nvme_ioq_poll_period_us": 0, 00:26:58.815 "io_queue_requests": 512, 00:26:58.815 "delay_cmd_submit": true, 00:26:58.815 "transport_retry_count": 4, 00:26:58.815 "bdev_retry_count": 3, 00:26:58.815 "transport_ack_timeout": 0, 00:26:58.815 "ctrlr_loss_timeout_sec": 0, 00:26:58.815 "reconnect_delay_sec": 0, 00:26:58.815 "fast_io_fail_timeout_sec": 0, 00:26:58.815 "disable_auto_failback": false, 00:26:58.815 "generate_uuids": false, 00:26:58.815 "transport_tos": 0, 00:26:58.815 "nvme_error_stat": false, 00:26:58.815 "rdma_srq_size": 0, 00:26:58.815 "io_path_stat": false, 00:26:58.815 "allow_accel_sequence": false, 00:26:58.815 "rdma_max_cq_size": 0, 00:26:58.815 "rdma_cm_event_timeout_ms": 0, 00:26:58.815 "dhchap_digests": [ 00:26:58.815 "sha256", 00:26:58.815 "sha384", 00:26:58.815 "sha512" 00:26:58.815 ], 00:26:58.815 "dhchap_dhgroups": [ 00:26:58.815 "null", 00:26:58.815 "ffdhe2048", 00:26:58.815 "ffdhe3072", 00:26:58.815 "ffdhe4096", 00:26:58.815 "ffdhe6144", 00:26:58.815 "ffdhe8192" 00:26:58.815 ] 00:26:58.815 } 00:26:58.815 }, 00:26:58.815 { 00:26:58.815 "method": "bdev_nvme_attach_controller", 00:26:58.815 "params": { 00:26:58.815 "name": "nvme0", 00:26:58.815 "trtype": "TCP", 00:26:58.815 "adrfam": "IPv4", 00:26:58.815 "traddr": "127.0.0.1", 00:26:58.815 "trsvcid": "4420", 00:26:58.815 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:58.815 "prchk_reftag": false, 00:26:58.815 "prchk_guard": false, 00:26:58.815 "ctrlr_loss_timeout_sec": 0, 00:26:58.815 "reconnect_delay_sec": 0, 00:26:58.815 "fast_io_fail_timeout_sec": 0, 00:26:58.815 "psk": "key0", 00:26:58.815 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:58.815 "hdgst": false, 00:26:58.815 "ddgst": false 00:26:58.815 } 00:26:58.815 }, 00:26:58.815 { 00:26:58.815 "method": "bdev_nvme_set_hotplug", 00:26:58.815 "params": { 00:26:58.815 "period_us": 100000, 00:26:58.815 "enable": false 00:26:58.815 } 00:26:58.815 }, 00:26:58.815 { 00:26:58.815 "method": "bdev_wait_for_examine" 00:26:58.815 } 00:26:58.815 ] 00:26:58.815 }, 00:26:58.815 { 00:26:58.815 "subsystem": "nbd", 00:26:58.815 "config": [] 00:26:58.815 } 00:26:58.815 ] 00:26:58.815 }' 00:26:58.815 10:43:47 keyring_file -- keyring/file.sh@114 -- # killprocess 1331194 00:26:58.815 10:43:47 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1331194 ']' 00:26:58.815 10:43:47 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1331194 00:26:58.815 10:43:47 keyring_file -- common/autotest_common.sh@953 -- # uname 00:26:58.815 10:43:47 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:58.815 10:43:47 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1331194 00:26:58.815 10:43:47 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:58.815 10:43:47 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:58.815 10:43:47 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1331194' 00:26:58.815 killing process with pid 1331194 00:26:58.815 10:43:47 keyring_file -- common/autotest_common.sh@967 -- # kill 1331194 00:26:58.815 Received shutdown signal, test time was about 1.000000 seconds 00:26:58.815 00:26:58.815 Latency(us) 00:26:58.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.816 =================================================================================================================== 00:26:58.816 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:58.816 10:43:47 keyring_file -- common/autotest_common.sh@972 -- # wait 1331194 00:26:59.073 10:43:47 keyring_file -- keyring/file.sh@117 -- # bperfpid=1332657 00:26:59.073 10:43:47 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1332657 /var/tmp/bperf.sock 00:26:59.073 10:43:47 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1332657 ']' 00:26:59.073 10:43:47 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:59.073 10:43:47 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:26:59.073 10:43:47 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:59.074 10:43:47 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:59.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:59.074 10:43:47 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:59.074 10:43:47 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:26:59.074 "subsystems": [ 00:26:59.074 { 00:26:59.074 "subsystem": "keyring", 00:26:59.074 "config": [ 00:26:59.074 { 00:26:59.074 "method": "keyring_file_add_key", 00:26:59.074 "params": { 00:26:59.074 "name": "key0", 00:26:59.074 "path": "/tmp/tmp.dEe5cb1k2v" 00:26:59.074 } 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "method": "keyring_file_add_key", 00:26:59.074 "params": { 00:26:59.074 "name": "key1", 00:26:59.074 "path": "/tmp/tmp.1SjQddPOk8" 00:26:59.074 } 00:26:59.074 } 00:26:59.074 ] 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "subsystem": "iobuf", 00:26:59.074 "config": [ 00:26:59.074 { 00:26:59.074 "method": "iobuf_set_options", 00:26:59.074 "params": { 00:26:59.074 "small_pool_count": 8192, 00:26:59.074 "large_pool_count": 1024, 00:26:59.074 "small_bufsize": 8192, 00:26:59.074 "large_bufsize": 135168 00:26:59.074 } 00:26:59.074 } 00:26:59.074 ] 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "subsystem": "sock", 00:26:59.074 "config": [ 00:26:59.074 { 00:26:59.074 "method": "sock_set_default_impl", 00:26:59.074 "params": { 00:26:59.074 "impl_name": "posix" 00:26:59.074 } 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "method": "sock_impl_set_options", 00:26:59.074 "params": { 00:26:59.074 "impl_name": "ssl", 00:26:59.074 "recv_buf_size": 4096, 00:26:59.074 "send_buf_size": 4096, 00:26:59.074 "enable_recv_pipe": true, 00:26:59.074 "enable_quickack": false, 00:26:59.074 "enable_placement_id": 0, 00:26:59.074 "enable_zerocopy_send_server": true, 00:26:59.074 "enable_zerocopy_send_client": false, 00:26:59.074 "zerocopy_threshold": 0, 00:26:59.074 "tls_version": 0, 00:26:59.074 "enable_ktls": false 00:26:59.074 } 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "method": "sock_impl_set_options", 00:26:59.074 "params": { 00:26:59.074 "impl_name": "posix", 00:26:59.074 "recv_buf_size": 2097152, 00:26:59.074 "send_buf_size": 2097152, 00:26:59.074 "enable_recv_pipe": true, 00:26:59.074 "enable_quickack": false, 00:26:59.074 "enable_placement_id": 0, 00:26:59.074 "enable_zerocopy_send_server": true, 00:26:59.074 "enable_zerocopy_send_client": false, 00:26:59.074 "zerocopy_threshold": 0, 00:26:59.074 "tls_version": 0, 00:26:59.074 "enable_ktls": false 00:26:59.074 } 00:26:59.074 } 00:26:59.074 ] 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "subsystem": "vmd", 00:26:59.074 "config": [] 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "subsystem": "accel", 00:26:59.074 "config": [ 00:26:59.074 { 00:26:59.074 "method": "accel_set_options", 00:26:59.074 "params": { 00:26:59.074 "small_cache_size": 128, 00:26:59.074 "large_cache_size": 16, 00:26:59.074 "task_count": 2048, 00:26:59.074 "sequence_count": 2048, 00:26:59.074 "buf_count": 2048 00:26:59.074 } 00:26:59.074 } 00:26:59.074 ] 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "subsystem": "bdev", 00:26:59.074 "config": [ 00:26:59.074 { 00:26:59.074 "method": "bdev_set_options", 00:26:59.074 "params": { 00:26:59.074 "bdev_io_pool_size": 65535, 00:26:59.074 "bdev_io_cache_size": 256, 00:26:59.074 "bdev_auto_examine": true, 00:26:59.074 "iobuf_small_cache_size": 128, 00:26:59.074 "iobuf_large_cache_size": 16 00:26:59.074 } 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "method": "bdev_raid_set_options", 00:26:59.074 "params": { 00:26:59.074 "process_window_size_kb": 1024 00:26:59.074 } 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "method": "bdev_iscsi_set_options", 00:26:59.074 "params": { 00:26:59.074 
"timeout_sec": 30 00:26:59.074 } 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "method": "bdev_nvme_set_options", 00:26:59.074 "params": { 00:26:59.074 "action_on_timeout": "none", 00:26:59.074 "timeout_us": 0, 00:26:59.074 "timeout_admin_us": 0, 00:26:59.074 "keep_alive_timeout_ms": 10000, 00:26:59.074 "arbitration_burst": 0, 00:26:59.074 "low_priority_weight": 0, 00:26:59.074 "medium_priority_weight": 0, 00:26:59.074 "high_priority_weight": 0, 00:26:59.074 "nvme_adminq_poll_period_us": 10000, 00:26:59.074 "nvme_ioq_poll_period_us": 0, 00:26:59.074 "io_queue_requests": 512, 00:26:59.074 "delay_cmd_submit": true, 00:26:59.074 "transport_retry_count": 4, 00:26:59.074 "bdev_retry_count": 3, 00:26:59.074 "transport_ack_timeout": 0, 00:26:59.074 "ctrlr_loss_timeout_sec": 0, 00:26:59.074 "reconnect_delay_sec": 0, 00:26:59.074 "fast_io_fail_timeout_sec": 0, 00:26:59.074 "disable_auto_failback": false, 00:26:59.074 "generate_uuids": false, 00:26:59.074 "transport_tos": 0, 00:26:59.074 "nvme_error_stat": false, 00:26:59.074 "rdma_srq_size": 0, 00:26:59.074 "io_path_stat": false, 00:26:59.074 "allow_accel_sequence": false, 00:26:59.074 "rdma_max_cq_size": 0, 00:26:59.074 "rdma_cm_event_timeout_ms": 0, 00:26:59.074 "dhchap_digests": [ 00:26:59.074 "sha256", 00:26:59.074 "sha384", 00:26:59.074 "sha512" 00:26:59.074 ], 00:26:59.074 "dhchap_dhgroups": [ 00:26:59.074 "null", 00:26:59.074 "ffdhe2048", 00:26:59.074 "ffdhe3072", 00:26:59.074 "ffdhe4096", 00:26:59.074 "ffdhe6144", 00:26:59.074 "ffdhe8192" 00:26:59.074 ] 00:26:59.074 } 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "method": "bdev_nvme_attach_controller", 00:26:59.074 "params": { 00:26:59.074 "name": "nvme0", 00:26:59.074 "trtype": "TCP", 00:26:59.074 "adrfam": "IPv4", 00:26:59.074 "traddr": "127.0.0.1", 00:26:59.074 "trsvcid": "4420", 00:26:59.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:59.074 "prchk_reftag": false, 00:26:59.074 "prchk_guard": false, 00:26:59.074 "ctrlr_loss_timeout_sec": 0, 00:26:59.074 "reconnect_delay_sec": 0, 00:26:59.074 "fast_io_fail_timeout_sec": 0, 00:26:59.074 "psk": "key0", 00:26:59.074 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:59.074 "hdgst": false, 00:26:59.074 "ddgst": false 00:26:59.074 } 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "method": "bdev_nvme_set_hotplug", 00:26:59.074 "params": { 00:26:59.074 "period_us": 100000, 00:26:59.074 "enable": false 00:26:59.074 } 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "method": "bdev_wait_for_examine" 00:26:59.074 } 00:26:59.074 ] 00:26:59.074 }, 00:26:59.074 { 00:26:59.074 "subsystem": "nbd", 00:26:59.074 "config": [] 00:26:59.074 } 00:26:59.074 ] 00:26:59.074 }' 00:26:59.074 10:43:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:59.074 [2024-07-15 10:43:47.510889] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:26:59.074 [2024-07-15 10:43:47.510993] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332657 ] 00:26:59.074 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.074 [2024-07-15 10:43:47.568445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.331 [2024-07-15 10:43:47.677556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.331 [2024-07-15 10:43:47.850835] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:00.263 10:43:48 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:00.263 10:43:48 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:00.263 10:43:48 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:00.263 10:43:48 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:00.263 10:43:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:00.263 10:43:48 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:00.263 10:43:48 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:00.263 10:43:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:00.263 10:43:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:00.263 10:43:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:00.263 10:43:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:00.263 10:43:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:00.520 10:43:48 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:00.520 10:43:48 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:00.520 10:43:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:00.520 10:43:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:00.520 10:43:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:00.520 10:43:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:00.520 10:43:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:00.777 10:43:49 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:00.777 10:43:49 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:00.777 10:43:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:00.777 10:43:49 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:01.032 10:43:49 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:01.032 10:43:49 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:01.032 10:43:49 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.dEe5cb1k2v /tmp/tmp.1SjQddPOk8 00:27:01.032 10:43:49 keyring_file -- keyring/file.sh@20 -- # killprocess 1332657 00:27:01.032 10:43:49 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1332657 ']' 00:27:01.032 10:43:49 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1332657 00:27:01.032 10:43:49 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:27:01.032 10:43:49 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:01.032 10:43:49 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1332657 00:27:01.032 10:43:49 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:01.032 10:43:49 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:01.032 10:43:49 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1332657' 00:27:01.032 killing process with pid 1332657 00:27:01.032 10:43:49 keyring_file -- common/autotest_common.sh@967 -- # kill 1332657 00:27:01.032 Received shutdown signal, test time was about 1.000000 seconds 00:27:01.032 00:27:01.032 Latency(us) 00:27:01.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.032 =================================================================================================================== 00:27:01.032 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:01.033 10:43:49 keyring_file -- common/autotest_common.sh@972 -- # wait 1332657 00:27:01.318 10:43:49 keyring_file -- keyring/file.sh@21 -- # killprocess 1331189 00:27:01.318 10:43:49 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1331189 ']' 00:27:01.318 10:43:49 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1331189 00:27:01.318 10:43:49 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:01.318 10:43:49 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:01.318 10:43:49 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1331189 00:27:01.318 10:43:49 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:01.318 10:43:49 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:01.318 10:43:49 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1331189' 00:27:01.318 killing process with pid 1331189 00:27:01.318 10:43:49 keyring_file -- common/autotest_common.sh@967 -- # kill 1331189 00:27:01.318 [2024-07-15 10:43:49.752382] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:01.318 10:43:49 keyring_file -- common/autotest_common.sh@972 -- # wait 1331189 00:27:01.908 00:27:01.908 real 0m13.975s 00:27:01.908 user 0m35.115s 00:27:01.908 sys 0m3.181s 00:27:01.908 10:43:50 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:01.908 10:43:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:01.908 ************************************ 00:27:01.908 END TEST keyring_file 00:27:01.908 ************************************ 00:27:01.908 10:43:50 -- common/autotest_common.sh@1142 -- # return 0 00:27:01.908 10:43:50 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:01.908 10:43:50 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:01.908 10:43:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:01.908 10:43:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.908 10:43:50 -- common/autotest_common.sh@10 -- # set +x 00:27:01.908 ************************************ 00:27:01.908 START TEST keyring_linux 00:27:01.908 ************************************ 00:27:01.908 10:43:50 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:01.908 * Looking for test storage... 00:27:01.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:01.908 10:43:50 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.908 10:43:50 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.908 10:43:50 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.908 10:43:50 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.908 10:43:50 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.908 10:43:50 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.908 10:43:50 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.908 10:43:50 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:01.908 10:43:50 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:01.908 10:43:50 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:01.908 10:43:50 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:01.908 10:43:50 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:01.908 10:43:50 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:01.908 10:43:50 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:01.908 10:43:50 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:01.908 10:43:50 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:01.908 /tmp/:spdk-test:key0 00:27:01.908 10:43:50 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:01.908 10:43:50 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:01.908 10:43:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:01.908 /tmp/:spdk-test:key1 00:27:01.908 10:43:50 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1333020 00:27:01.908 10:43:50 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:01.908 10:43:50 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1333020 00:27:01.908 10:43:50 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1333020 ']' 00:27:01.908 10:43:50 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.908 10:43:50 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:01.908 10:43:50 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.908 10:43:50 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:01.908 10:43:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:01.908 [2024-07-15 10:43:50.425584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:01.908 [2024-07-15 10:43:50.425681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333020 ] 00:27:01.908 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.166 [2024-07-15 10:43:50.483666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.166 [2024-07-15 10:43:50.593523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.449 10:43:50 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:02.449 10:43:50 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:02.449 10:43:50 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:02.449 10:43:50 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.449 10:43:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:02.449 [2024-07-15 10:43:50.829969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.449 null0 00:27:02.449 [2024-07-15 10:43:50.862001] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:02.449 [2024-07-15 10:43:50.862456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:02.449 10:43:50 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.449 10:43:50 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:02.449 809392447 00:27:02.449 10:43:50 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:02.449 593434124 00:27:02.449 10:43:50 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1333148 00:27:02.449 10:43:50 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:02.449 10:43:50 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1333148 /var/tmp/bperf.sock 00:27:02.449 10:43:50 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1333148 ']' 00:27:02.449 10:43:50 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:02.449 10:43:50 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:02.449 10:43:50 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:02.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:02.449 10:43:50 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:02.449 10:43:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:02.449 [2024-07-15 10:43:50.923799] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:02.449 [2024-07-15 10:43:50.923886] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333148 ] 00:27:02.449 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.449 [2024-07-15 10:43:50.979373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.707 [2024-07-15 10:43:51.086010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.707 10:43:51 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:02.707 10:43:51 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:02.707 10:43:51 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:02.707 10:43:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:02.965 10:43:51 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:02.965 10:43:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:03.223 10:43:51 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:03.223 10:43:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:03.481 [2024-07-15 10:43:51.918378] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:03.481 nvme0n1 00:27:03.481 10:43:52 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:03.481 10:43:52 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:03.481 10:43:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:03.481 10:43:52 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:03.481 10:43:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:03.481 10:43:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:03.739 10:43:52 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:03.739 10:43:52 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:03.739 10:43:52 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:03.739 10:43:52 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:03.739 10:43:52 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:03.739 10:43:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:03.739 10:43:52 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:03.996 10:43:52 keyring_linux -- keyring/linux.sh@25 -- # sn=809392447 00:27:03.996 10:43:52 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:03.996 10:43:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:27:03.996 10:43:52 keyring_linux -- keyring/linux.sh@26 -- # [[ 809392447 == \8\0\9\3\9\2\4\4\7 ]] 00:27:03.996 10:43:52 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 809392447 00:27:03.996 10:43:52 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:03.996 10:43:52 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:04.254 Running I/O for 1 seconds... 00:27:05.186 00:27:05.186 Latency(us) 00:27:05.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.186 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:05.186 nvme0n1 : 1.01 10821.87 42.27 0.00 0.00 11749.00 8349.77 20194.80 00:27:05.186 =================================================================================================================== 00:27:05.186 Total : 10821.87 42.27 0.00 0.00 11749.00 8349.77 20194.80 00:27:05.186 0 00:27:05.186 10:43:53 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:05.186 10:43:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:05.443 10:43:53 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:05.443 10:43:53 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:05.443 10:43:53 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:05.443 10:43:53 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:05.443 10:43:53 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:05.443 10:43:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:05.699 10:43:54 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:05.699 10:43:54 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:05.699 10:43:54 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:05.699 10:43:54 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:05.699 10:43:54 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:05.699 10:43:54 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:05.699 10:43:54 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:05.699 10:43:54 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:05.699 10:43:54 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:05.699 10:43:54 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:05.699 10:43:54 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:05.699 10:43:54 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:05.956 [2024-07-15 10:43:54.366073] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:05.956 [2024-07-15 10:43:54.366473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13803f0 (107): Transport endpoint is not connected 00:27:05.956 [2024-07-15 10:43:54.367467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13803f0 (9): Bad file descriptor 00:27:05.956 [2024-07-15 10:43:54.368467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:05.956 [2024-07-15 10:43:54.368493] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:05.956 [2024-07-15 10:43:54.368521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:05.956 request: 00:27:05.956 { 00:27:05.956 "name": "nvme0", 00:27:05.956 "trtype": "tcp", 00:27:05.956 "traddr": "127.0.0.1", 00:27:05.956 "adrfam": "ipv4", 00:27:05.956 "trsvcid": "4420", 00:27:05.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:05.956 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:05.956 "prchk_reftag": false, 00:27:05.956 "prchk_guard": false, 00:27:05.956 "hdgst": false, 00:27:05.956 "ddgst": false, 00:27:05.956 "psk": ":spdk-test:key1", 00:27:05.956 "method": "bdev_nvme_attach_controller", 00:27:05.956 "req_id": 1 00:27:05.956 } 00:27:05.956 Got JSON-RPC error response 00:27:05.956 response: 00:27:05.956 { 00:27:05.956 "code": -5, 00:27:05.956 "message": "Input/output error" 00:27:05.956 } 00:27:05.956 10:43:54 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:27:05.956 10:43:54 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:05.956 10:43:54 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:05.956 10:43:54 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@33 -- # sn=809392447 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 809392447 00:27:05.956 1 links removed 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@33 -- # sn=593434124 00:27:05.956 
10:43:54 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 593434124 00:27:05.956 1 links removed 00:27:05.956 10:43:54 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1333148 00:27:05.956 10:43:54 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1333148 ']' 00:27:05.956 10:43:54 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1333148 00:27:05.956 10:43:54 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:05.956 10:43:54 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:05.956 10:43:54 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1333148 00:27:05.956 10:43:54 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:05.956 10:43:54 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:05.956 10:43:54 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1333148' 00:27:05.956 killing process with pid 1333148 00:27:05.956 10:43:54 keyring_linux -- common/autotest_common.sh@967 -- # kill 1333148 00:27:05.956 Received shutdown signal, test time was about 1.000000 seconds 00:27:05.957 00:27:05.957 Latency(us) 00:27:05.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.957 =================================================================================================================== 00:27:05.957 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:05.957 10:43:54 keyring_linux -- common/autotest_common.sh@972 -- # wait 1333148 00:27:06.214 10:43:54 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1333020 00:27:06.214 10:43:54 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1333020 ']' 00:27:06.214 10:43:54 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1333020 00:27:06.214 10:43:54 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:06.214 10:43:54 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:06.214 10:43:54 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1333020 00:27:06.214 10:43:54 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:06.214 10:43:54 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:06.214 10:43:54 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1333020' 00:27:06.214 killing process with pid 1333020 00:27:06.214 10:43:54 keyring_linux -- common/autotest_common.sh@967 -- # kill 1333020 00:27:06.214 10:43:54 keyring_linux -- common/autotest_common.sh@972 -- # wait 1333020 00:27:06.780 00:27:06.780 real 0m4.943s 00:27:06.780 user 0m9.611s 00:27:06.780 sys 0m1.545s 00:27:06.780 10:43:55 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:06.780 10:43:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:06.780 ************************************ 00:27:06.780 END TEST keyring_linux 00:27:06.780 ************************************ 00:27:06.780 10:43:55 -- common/autotest_common.sh@1142 -- # return 0 00:27:06.780 10:43:55 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:06.780 10:43:55 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:06.780 10:43:55 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:06.780 10:43:55 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:06.780 10:43:55 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:06.780 10:43:55 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:06.780 10:43:55 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:06.780 10:43:55 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:06.780 10:43:55 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:06.780 10:43:55 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:06.780 10:43:55 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:06.780 10:43:55 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:06.780 10:43:55 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:06.780 10:43:55 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:06.780 10:43:55 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:06.780 10:43:55 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:06.780 10:43:55 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:06.780 10:43:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:06.780 10:43:55 -- common/autotest_common.sh@10 -- # set +x 00:27:06.780 10:43:55 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:06.780 10:43:55 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:06.780 10:43:55 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:06.780 10:43:55 -- common/autotest_common.sh@10 -- # set +x 00:27:08.680 INFO: APP EXITING 00:27:08.680 INFO: killing all VMs 00:27:08.680 INFO: killing vhost app 00:27:08.680 INFO: EXIT DONE 00:27:09.616 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:09.616 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:09.616 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:09.616 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:09.616 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:09.616 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:27:09.616 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:09.616 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:09.616 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:27:09.874 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:09.874 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:09.874 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:09.874 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:09.874 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:09.874 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:09.874 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:09.874 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:11.245 Cleaning 00:27:11.245 Removing: /var/run/dpdk/spdk0/config 00:27:11.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:11.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:11.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:11.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:11.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:11.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:11.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:11.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:11.245 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:11.245 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:11.245 Removing: /var/run/dpdk/spdk1/config 00:27:11.245 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:11.245 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:11.245 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:11.245 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:11.245 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:11.245 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:11.245 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:11.245 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:11.245 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:11.245 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:11.245 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:11.245 Removing: /var/run/dpdk/spdk2/config 00:27:11.245 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:11.245 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:11.245 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:11.245 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:11.245 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:11.245 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:11.245 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:11.245 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:11.245 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:11.245 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:11.245 Removing: /var/run/dpdk/spdk3/config 00:27:11.245 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:11.245 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:11.245 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:11.245 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:11.245 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:11.245 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:11.245 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:11.245 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:11.245 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:11.245 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:11.245 Removing: /var/run/dpdk/spdk4/config 00:27:11.245 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:11.245 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:11.245 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:11.245 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:11.245 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:11.245 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:11.245 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:11.245 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:11.245 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:11.245 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:11.245 Removing: /dev/shm/bdev_svc_trace.1 00:27:11.245 Removing: /dev/shm/nvmf_trace.0 00:27:11.245 Removing: /dev/shm/spdk_tgt_trace.pid1076189 00:27:11.245 Removing: /var/run/dpdk/spdk0 00:27:11.245 Removing: /var/run/dpdk/spdk1 00:27:11.245 Removing: /var/run/dpdk/spdk2 00:27:11.245 Removing: /var/run/dpdk/spdk3 00:27:11.245 Removing: /var/run/dpdk/spdk4 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1074649 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1075376 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1076189 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1076628 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1077335 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1077475 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1078187 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1078199 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1078441 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1079634 00:27:11.245 Removing: 
/var/run/dpdk/spdk_pid1080668 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1080867 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1081052 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1081329 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1081534 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1081719 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1081873 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1082066 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1082399 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1085344 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1085506 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1085675 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1085682 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1086108 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1086112 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1086538 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1086552 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1086833 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1086852 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1087014 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1087134 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1087514 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1087670 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1087869 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1088037 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1088173 00:27:11.245 Removing: /var/run/dpdk/spdk_pid1088250 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1088448 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1088677 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1088836 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1088994 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1089261 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1089423 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1089581 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1089849 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1090009 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1090170 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1090382 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1090595 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1090761 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1090915 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1091190 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1091351 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1091514 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1091785 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1091946 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1092106 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1092288 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1092494 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1094625 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1120875 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1123409 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1130377 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1133677 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1135918 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1136441 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1140279 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1144127 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1144130 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1144793 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1145332 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1145988 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1146384 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1146392 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1146652 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1146718 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1146788 00:27:11.246 Removing: 
/var/run/dpdk/spdk_pid1147337 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1147989 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1148643 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1149042 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1149056 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1149272 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1150214 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1151030 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1156776 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1157049 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1159559 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1163346 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1165433 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1171811 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1177018 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1178213 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1178992 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1189697 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1191919 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1216326 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1219109 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1220291 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1221507 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1221630 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1221768 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1221907 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1222333 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1223539 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1224256 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1224571 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1226192 00:27:11.246 Removing: /var/run/dpdk/spdk_pid1226610 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1227169 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1229684 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1235528 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1238251 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1242225 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1243563 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1244721 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1247349 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1249583 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1253789 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1253795 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1256684 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1256823 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1256965 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1257227 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1257232 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1259992 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1260327 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1262984 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1264925 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1268258 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1271585 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1277921 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1282750 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1282754 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1294560 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1294970 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1295497 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1295901 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1296482 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1296900 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1297317 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1297732 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1300219 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1300482 00:27:11.542 Removing: 
/var/run/dpdk/spdk_pid1304275 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1304333 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1306063 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1310971 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1310977 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1313871 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1315397 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1317304 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1318160 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1319570 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1320445 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1325770 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1326115 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1326507 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1328058 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1328400 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1328737 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1331189 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1331194 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1332657 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1333020 00:27:11.542 Removing: /var/run/dpdk/spdk_pid1333148 00:27:11.542 Clean 00:27:11.542 10:43:59 -- common/autotest_common.sh@1451 -- # return 0 00:27:11.542 10:43:59 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:11.542 10:43:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:11.542 10:43:59 -- common/autotest_common.sh@10 -- # set +x 00:27:11.542 10:44:00 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:11.542 10:44:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:11.542 10:44:00 -- common/autotest_common.sh@10 -- # set +x 00:27:11.542 10:44:00 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:11.542 10:44:00 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:11.542 10:44:00 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:11.542 10:44:00 -- spdk/autotest.sh@391 -- # hash lcov 00:27:11.542 10:44:00 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:11.542 10:44:00 -- spdk/autotest.sh@393 -- # hostname 00:27:11.542 10:44:00 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:11.798 geninfo: WARNING: invalid characters removed from testname! 
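With the functional tests and workspace cleanup done, the remaining entries hand off to coverage aggregation: an lcov capture is taken over the instrumented build tree (tagged spdk-gp-06), and the entries that follow merge it with the baseline capture and strip third-party and system paths. A minimal sketch of that capture/merge/filter sequence, assuming lcov is on PATH and using shortened relative paths in place of the full workspace paths (rc options abbreviated from the log):

    # Capture counters from the instrumented tree
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
         --no-external -q -c -d ./spdk -t spdk-gp-06 -o cov_test.info
    # Merge the pre-test baseline with the test capture
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # Drop bundled DPDK and system headers from the report
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info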
00:27:43.906 10:44:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:43.906 10:44:31 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:46.442 10:44:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:49.720 10:44:37 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:52.245 10:44:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:55.570 10:44:43 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:58.098 10:44:46 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:58.099 10:44:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.099 10:44:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:58.099 10:44:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.099 10:44:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.099 10:44:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.099 10:44:46 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.099 10:44:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.099 10:44:46 -- paths/export.sh@5 -- $ export PATH 00:27:58.099 10:44:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.099 10:44:46 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:27:58.099 10:44:46 -- common/autobuild_common.sh@444 -- $ date +%s 00:27:58.099 10:44:46 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721033086.XXXXXX 00:27:58.099 10:44:46 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721033086.mGRU0M 00:27:58.099 10:44:46 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:27:58.099 10:44:46 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:27:58.099 10:44:46 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:27:58.099 10:44:46 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:27:58.099 10:44:46 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:27:58.099 10:44:46 -- common/autobuild_common.sh@460 -- $ get_config_params 00:27:58.099 10:44:46 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:27:58.099 10:44:46 -- common/autotest_common.sh@10 -- $ set +x 00:27:58.099 10:44:46 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:27:58.099 10:44:46 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:27:58.099 10:44:46 -- pm/common@17 -- $ local monitor 00:27:58.099 10:44:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:58.099 10:44:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:58.099 10:44:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:58.099 10:44:46 -- pm/common@21 -- $ date +%s 00:27:58.099 10:44:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:58.099 10:44:46 -- pm/common@21 -- $ date +%s 00:27:58.099 
10:44:46 -- pm/common@25 -- $ sleep 1 00:27:58.099 10:44:46 -- pm/common@21 -- $ date +%s 00:27:58.099 10:44:46 -- pm/common@21 -- $ date +%s 00:27:58.099 10:44:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721033086 00:27:58.099 10:44:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721033086 00:27:58.099 10:44:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721033086 00:27:58.099 10:44:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721033086 00:27:58.099 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721033086_collect-vmstat.pm.log 00:27:58.099 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721033086_collect-cpu-load.pm.log 00:27:58.099 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721033086_collect-cpu-temp.pm.log 00:27:58.099 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721033086_collect-bmc-pm.bmc.pm.log 00:27:59.040 10:44:47 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:27:59.040 10:44:47 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:27:59.040 10:44:47 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:59.040 10:44:47 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:59.040 10:44:47 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:59.040 10:44:47 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:59.040 10:44:47 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:59.040 10:44:47 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:59.040 10:44:47 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:59.040 10:44:47 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:59.040 10:44:47 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:27:59.040 10:44:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:27:59.040 10:44:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:27:59.040 10:44:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:59.040 10:44:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:27:59.040 10:44:47 -- pm/common@44 -- $ pid=1342723 00:27:59.040 10:44:47 -- pm/common@50 -- $ kill -TERM 1342723 00:27:59.040 10:44:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:59.040 10:44:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:27:59.040 10:44:47 -- pm/common@44 -- $ pid=1342725 00:27:59.040 10:44:47 -- pm/common@50 -- $ kill 
-TERM 1342725 00:27:59.040 10:44:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:59.040 10:44:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:27:59.040 10:44:47 -- pm/common@44 -- $ pid=1342727 00:27:59.040 10:44:47 -- pm/common@50 -- $ kill -TERM 1342727 00:27:59.040 10:44:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:59.040 10:44:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:27:59.040 10:44:47 -- pm/common@44 -- $ pid=1342758 00:27:59.040 10:44:47 -- pm/common@50 -- $ sudo -E kill -TERM 1342758 00:27:59.040 + [[ -n 990967 ]] 00:27:59.040 + sudo kill 990967 00:27:59.052 [Pipeline] } 00:27:59.074 [Pipeline] // stage 00:27:59.080 [Pipeline] } 00:27:59.105 [Pipeline] // timeout 00:27:59.113 [Pipeline] } 00:27:59.134 [Pipeline] // catchError 00:27:59.140 [Pipeline] } 00:27:59.161 [Pipeline] // wrap 00:27:59.169 [Pipeline] } 00:27:59.188 [Pipeline] // catchError 00:27:59.199 [Pipeline] stage 00:27:59.201 [Pipeline] { (Epilogue) 00:27:59.217 [Pipeline] catchError 00:27:59.219 [Pipeline] { 00:27:59.237 [Pipeline] echo 00:27:59.240 Cleanup processes 00:27:59.250 [Pipeline] sh 00:27:59.541 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:59.541 1342862 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:27:59.541 1342989 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:59.555 [Pipeline] sh 00:27:59.841 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:59.841 ++ awk '{print $1}' 00:27:59.841 ++ grep -v 'sudo pgrep' 00:27:59.841 + sudo kill -9 1342862 00:27:59.854 [Pipeline] sh 00:28:00.139 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:08.278 [Pipeline] sh 00:28:08.573 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:08.573 Artifacts sizes are good 00:28:08.589 [Pipeline] archiveArtifacts 00:28:08.597 Archiving artifacts 00:28:08.837 [Pipeline] sh 00:28:09.126 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:09.141 [Pipeline] cleanWs 00:28:09.152 [WS-CLEANUP] Deleting project workspace... 00:28:09.152 [WS-CLEANUP] Deferred wipeout is used... 00:28:09.160 [WS-CLEANUP] done 00:28:09.163 [Pipeline] } 00:28:09.185 [Pipeline] // catchError 00:28:09.199 [Pipeline] sh 00:28:09.481 + logger -p user.info -t JENKINS-CI 00:28:09.489 [Pipeline] } 00:28:09.507 [Pipeline] // stage 00:28:09.514 [Pipeline] } 00:28:09.535 [Pipeline] // node 00:28:09.541 [Pipeline] End of Pipeline 00:28:09.576 Finished: SUCCESS